Test Report: KVM_Linux_crio 16143

ecdecece8b2b49faa4fa406a3ffa4654981a0212:2024-04-04:33883

Test fail (29/325)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 153.83
53 TestAddons/StoppedEnableDisable 154.32
145 TestFunctional/parallel/ImageCommands/ImageRemove 2.77
172 TestMultiControlPlane/serial/StopSecondaryNode 142.34
174 TestMultiControlPlane/serial/RestartSecondaryNode 47.38
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 364.95
179 TestMultiControlPlane/serial/StopCluster 142.17
239 TestMultiNode/serial/RestartKeepsNodes 315.12
241 TestMultiNode/serial/StopMultiNode 141.66
248 TestPreload 244.13
256 TestKubernetesUpgrade 356.67
331 TestStartStop/group/old-k8s-version/serial/FirstStart 274.66
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.13
356 TestStartStop/group/embed-certs/serial/Stop 139.09
359 TestStartStop/group/no-preload/serial/Stop 139.17
360 TestStartStop/group/old-k8s-version/serial/DeployApp 0.52
361 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 108.6
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
370 TestStartStop/group/old-k8s-version/serial/SecondStart 751.29
371 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.37
372 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.29
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.52
374 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.47
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 411.94
376 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 358.59
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 351.5
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 118.45
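
Many of the failures above cluster around stop/restart flows (StopSecondaryNode, StopCluster, StopMultiNode, the StartStop groups) and the old-k8s-version group; durations are in seconds. To dig into one failure locally, a single integration test can be re-run on its own. The commands below are a sketch of the usual minikube integration-test workflow, assumed from the test names and the driver/runtime in this report rather than taken from it; the --minikube-start-args flag and the prebuilt out/minikube-linux-amd64 binary are assumptions about the local checkout:

    # assumed local repro from a minikube source checkout, with out/minikube-linux-amd64 already built
    go test ./test/integration -run 'TestAddons/parallel/Ingress' -v -timeout 60m \
        --minikube-start-args='--driver=kvm2 --container-runtime=crio'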

TestAddons/parallel/Ingress (153.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-371778 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-371778 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-371778 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [695f5ffb-1ddc-4d3c-876b-41c0e72062f7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [695f5ffb-1ddc-4d3c-876b-41c0e72062f7] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004326834s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-371778 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.410172453s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
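
The step that failed is the in-VM curl against the ingress: "ssh: Process exited with status 28" is the remote command's exit code, and 28 is curl's operation-timed-out error, so the request to http://127.0.0.1/ with the nginx.example.com Host header never got a response before the deadline. A manual follow-up along these lines (hypothetical commands against the same addons-371778 profile, not part of this run) would show whether the ingress-nginx controller was still serving on the node:

    out/minikube-linux-amd64 -p addons-371778 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context addons-371778 -n ingress-nginx get pods,svc -o wide
    kubectl --context addons-371778 get ingress -A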
addons_test.go:286: (dbg) Run:  kubectl --context addons-371778 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.212
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-371778 addons disable ingress-dns --alsologtostderr -v=1: (1.215707785s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-371778 addons disable ingress --alsologtostderr -v=1: (7.78512539s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-371778 -n addons-371778
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-371778 logs -n 25: (1.505960556s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-432080                                                                     | download-only-432080 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC | 04 Apr 24 21:30 UTC |
	| delete  | -p download-only-878755                                                                     | download-only-878755 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC | 04 Apr 24 21:30 UTC |
	| delete  | -p download-only-688290                                                                     | download-only-688290 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC | 04 Apr 24 21:30 UTC |
	| delete  | -p download-only-432080                                                                     | download-only-432080 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC | 04 Apr 24 21:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-012512 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC |                     |
	|         | binary-mirror-012512                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:41613                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-012512                                                                     | binary-mirror-012512 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC | 04 Apr 24 21:30 UTC |
	| addons  | enable dashboard -p                                                                         | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC |                     |
	|         | addons-371778                                                                               |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC |                     |
	|         | addons-371778                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-371778 --wait=true                                                                | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC | 04 Apr 24 21:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:33 UTC | 04 Apr 24 21:33 UTC |
	|         | addons-371778                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-371778 ssh cat                                                                       | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC | 04 Apr 24 21:34 UTC |
	|         | /opt/local-path-provisioner/pvc-5be7a3b0-ba74-4929-8411-99662f07185f_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-371778 addons disable                                                                | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC | 04 Apr 24 21:34 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-371778 addons                                                                        | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC | 04 Apr 24 21:34 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-371778 ip                                                                            | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC | 04 Apr 24 21:34 UTC |
	| addons  | addons-371778 addons disable                                                                | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC | 04 Apr 24 21:34 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC | 04 Apr 24 21:34 UTC |
	|         | -p addons-371778                                                                            |                      |         |                |                     |                     |
	| addons  | addons-371778 addons disable                                                                | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC | 04 Apr 24 21:34 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC | 04 Apr 24 21:34 UTC |
	|         | addons-371778                                                                               |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC | 04 Apr 24 21:34 UTC |
	|         | -p addons-371778                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-371778 addons                                                                        | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC | 04 Apr 24 21:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ssh     | addons-371778 ssh curl -s                                                                   | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| addons  | addons-371778 addons                                                                        | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:34 UTC | 04 Apr 24 21:34 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-371778 ip                                                                            | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:36 UTC | 04 Apr 24 21:36 UTC |
	| addons  | addons-371778 addons disable                                                                | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:36 UTC | 04 Apr 24 21:36 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-371778 addons disable                                                                | addons-371778        | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:36 UTC | 04 Apr 24 21:36 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 21:30:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 21:30:18.683565   13429 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:30:18.683662   13429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:30:18.683670   13429 out.go:304] Setting ErrFile to fd 2...
	I0404 21:30:18.683674   13429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:30:18.683856   13429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:30:18.684480   13429 out.go:298] Setting JSON to false
	I0404 21:30:18.685301   13429 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":764,"bootTime":1712265455,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 21:30:18.685364   13429 start.go:139] virtualization: kvm guest
	I0404 21:30:18.687694   13429 out.go:177] * [addons-371778] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 21:30:18.688990   13429 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 21:30:18.690167   13429 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 21:30:18.689004   13429 notify.go:220] Checking for updates...
	I0404 21:30:18.691659   13429 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:30:18.692948   13429 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:30:18.694011   13429 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 21:30:18.695039   13429 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 21:30:18.696350   13429 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 21:30:18.728735   13429 out.go:177] * Using the kvm2 driver based on user configuration
	I0404 21:30:18.730022   13429 start.go:297] selected driver: kvm2
	I0404 21:30:18.730035   13429 start.go:901] validating driver "kvm2" against <nil>
	I0404 21:30:18.730047   13429 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 21:30:18.730771   13429 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:30:18.730847   13429 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 21:30:18.745947   13429 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 21:30:18.746019   13429 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0404 21:30:18.746256   13429 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 21:30:18.746345   13429 cni.go:84] Creating CNI manager for ""
	I0404 21:30:18.746450   13429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 21:30:18.746470   13429 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0404 21:30:18.746564   13429 start.go:340] cluster config:
	{Name:addons-371778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-371778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:30:18.746709   13429 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:30:18.750046   13429 out.go:177] * Starting "addons-371778" primary control-plane node in "addons-371778" cluster
	I0404 21:30:18.751861   13429 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:30:18.751934   13429 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0404 21:30:18.751944   13429 cache.go:56] Caching tarball of preloaded images
	I0404 21:30:18.752056   13429 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 21:30:18.752071   13429 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0404 21:30:18.752454   13429 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/config.json ...
	I0404 21:30:18.752479   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/config.json: {Name:mk7e5dbcb732cf011cc18b2716b8320ebd45d67a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:18.752651   13429 start.go:360] acquireMachinesLock for addons-371778: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 21:30:18.752718   13429 start.go:364] duration metric: took 39.582µs to acquireMachinesLock for "addons-371778"
	I0404 21:30:18.752748   13429 start.go:93] Provisioning new machine with config: &{Name:addons-371778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-371778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:30:18.752847   13429 start.go:125] createHost starting for "" (driver="kvm2")
	I0404 21:30:18.756211   13429 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0404 21:30:18.756419   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:30:18.756495   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:30:18.771183   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0404 21:30:18.771647   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:30:18.772206   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:30:18.772240   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:30:18.772576   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:30:18.772779   13429 main.go:141] libmachine: (addons-371778) Calling .GetMachineName
	I0404 21:30:18.772927   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:30:18.773090   13429 start.go:159] libmachine.API.Create for "addons-371778" (driver="kvm2")
	I0404 21:30:18.773117   13429 client.go:168] LocalClient.Create starting
	I0404 21:30:18.773152   13429 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem
	I0404 21:30:18.848337   13429 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem
	I0404 21:30:18.920591   13429 main.go:141] libmachine: Running pre-create checks...
	I0404 21:30:18.920614   13429 main.go:141] libmachine: (addons-371778) Calling .PreCreateCheck
	I0404 21:30:18.921122   13429 main.go:141] libmachine: (addons-371778) Calling .GetConfigRaw
	I0404 21:30:18.921541   13429 main.go:141] libmachine: Creating machine...
	I0404 21:30:18.921569   13429 main.go:141] libmachine: (addons-371778) Calling .Create
	I0404 21:30:18.921702   13429 main.go:141] libmachine: (addons-371778) Creating KVM machine...
	I0404 21:30:18.922948   13429 main.go:141] libmachine: (addons-371778) DBG | found existing default KVM network
	I0404 21:30:18.923792   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:18.923658   13451 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0404 21:30:18.923810   13429 main.go:141] libmachine: (addons-371778) DBG | created network xml: 
	I0404 21:30:18.923821   13429 main.go:141] libmachine: (addons-371778) DBG | <network>
	I0404 21:30:18.923826   13429 main.go:141] libmachine: (addons-371778) DBG |   <name>mk-addons-371778</name>
	I0404 21:30:18.923831   13429 main.go:141] libmachine: (addons-371778) DBG |   <dns enable='no'/>
	I0404 21:30:18.923836   13429 main.go:141] libmachine: (addons-371778) DBG |   
	I0404 21:30:18.923842   13429 main.go:141] libmachine: (addons-371778) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0404 21:30:18.923846   13429 main.go:141] libmachine: (addons-371778) DBG |     <dhcp>
	I0404 21:30:18.923853   13429 main.go:141] libmachine: (addons-371778) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0404 21:30:18.923857   13429 main.go:141] libmachine: (addons-371778) DBG |     </dhcp>
	I0404 21:30:18.923862   13429 main.go:141] libmachine: (addons-371778) DBG |   </ip>
	I0404 21:30:18.923869   13429 main.go:141] libmachine: (addons-371778) DBG |   
	I0404 21:30:18.923877   13429 main.go:141] libmachine: (addons-371778) DBG | </network>
	I0404 21:30:18.923901   13429 main.go:141] libmachine: (addons-371778) DBG | 
	I0404 21:30:18.929447   13429 main.go:141] libmachine: (addons-371778) DBG | trying to create private KVM network mk-addons-371778 192.168.39.0/24...
	I0404 21:30:18.995159   13429 main.go:141] libmachine: (addons-371778) DBG | private KVM network mk-addons-371778 192.168.39.0/24 created
	I0404 21:30:18.995187   13429 main.go:141] libmachine: (addons-371778) Setting up store path in /home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778 ...
	I0404 21:30:18.995215   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:18.995128   13451 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:30:18.995234   13429 main.go:141] libmachine: (addons-371778) Building disk image from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 21:30:18.995345   13429 main.go:141] libmachine: (addons-371778) Downloading /home/jenkins/minikube-integration/16143-5297/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0404 21:30:19.236981   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:19.236822   13451 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa...
	I0404 21:30:19.441143   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:19.440980   13451 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/addons-371778.rawdisk...
	I0404 21:30:19.441175   13429 main.go:141] libmachine: (addons-371778) DBG | Writing magic tar header
	I0404 21:30:19.441188   13429 main.go:141] libmachine: (addons-371778) DBG | Writing SSH key tar header
	I0404 21:30:19.441197   13429 main.go:141] libmachine: (addons-371778) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778 (perms=drwx------)
	I0404 21:30:19.441204   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:19.441094   13451 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778 ...
	I0404 21:30:19.441215   13429 main.go:141] libmachine: (addons-371778) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778
	I0404 21:30:19.441223   13429 main.go:141] libmachine: (addons-371778) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines
	I0404 21:30:19.441230   13429 main.go:141] libmachine: (addons-371778) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines (perms=drwxr-xr-x)
	I0404 21:30:19.441239   13429 main.go:141] libmachine: (addons-371778) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube (perms=drwxr-xr-x)
	I0404 21:30:19.441245   13429 main.go:141] libmachine: (addons-371778) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297 (perms=drwxrwxr-x)
	I0404 21:30:19.441257   13429 main.go:141] libmachine: (addons-371778) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0404 21:30:19.441265   13429 main.go:141] libmachine: (addons-371778) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0404 21:30:19.441291   13429 main.go:141] libmachine: (addons-371778) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:30:19.441299   13429 main.go:141] libmachine: (addons-371778) Creating domain...
	I0404 21:30:19.441385   13429 main.go:141] libmachine: (addons-371778) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297
	I0404 21:30:19.441412   13429 main.go:141] libmachine: (addons-371778) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0404 21:30:19.441427   13429 main.go:141] libmachine: (addons-371778) DBG | Checking permissions on dir: /home/jenkins
	I0404 21:30:19.441447   13429 main.go:141] libmachine: (addons-371778) DBG | Checking permissions on dir: /home
	I0404 21:30:19.441464   13429 main.go:141] libmachine: (addons-371778) DBG | Skipping /home - not owner
	I0404 21:30:19.442372   13429 main.go:141] libmachine: (addons-371778) define libvirt domain using xml: 
	I0404 21:30:19.442405   13429 main.go:141] libmachine: (addons-371778) <domain type='kvm'>
	I0404 21:30:19.442418   13429 main.go:141] libmachine: (addons-371778)   <name>addons-371778</name>
	I0404 21:30:19.442426   13429 main.go:141] libmachine: (addons-371778)   <memory unit='MiB'>4000</memory>
	I0404 21:30:19.442435   13429 main.go:141] libmachine: (addons-371778)   <vcpu>2</vcpu>
	I0404 21:30:19.442444   13429 main.go:141] libmachine: (addons-371778)   <features>
	I0404 21:30:19.442460   13429 main.go:141] libmachine: (addons-371778)     <acpi/>
	I0404 21:30:19.442476   13429 main.go:141] libmachine: (addons-371778)     <apic/>
	I0404 21:30:19.442483   13429 main.go:141] libmachine: (addons-371778)     <pae/>
	I0404 21:30:19.442492   13429 main.go:141] libmachine: (addons-371778)     
	I0404 21:30:19.442523   13429 main.go:141] libmachine: (addons-371778)   </features>
	I0404 21:30:19.442546   13429 main.go:141] libmachine: (addons-371778)   <cpu mode='host-passthrough'>
	I0404 21:30:19.442558   13429 main.go:141] libmachine: (addons-371778)   
	I0404 21:30:19.442573   13429 main.go:141] libmachine: (addons-371778)   </cpu>
	I0404 21:30:19.442583   13429 main.go:141] libmachine: (addons-371778)   <os>
	I0404 21:30:19.442594   13429 main.go:141] libmachine: (addons-371778)     <type>hvm</type>
	I0404 21:30:19.442607   13429 main.go:141] libmachine: (addons-371778)     <boot dev='cdrom'/>
	I0404 21:30:19.442622   13429 main.go:141] libmachine: (addons-371778)     <boot dev='hd'/>
	I0404 21:30:19.442636   13429 main.go:141] libmachine: (addons-371778)     <bootmenu enable='no'/>
	I0404 21:30:19.442647   13429 main.go:141] libmachine: (addons-371778)   </os>
	I0404 21:30:19.442659   13429 main.go:141] libmachine: (addons-371778)   <devices>
	I0404 21:30:19.442670   13429 main.go:141] libmachine: (addons-371778)     <disk type='file' device='cdrom'>
	I0404 21:30:19.442695   13429 main.go:141] libmachine: (addons-371778)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/boot2docker.iso'/>
	I0404 21:30:19.442719   13429 main.go:141] libmachine: (addons-371778)       <target dev='hdc' bus='scsi'/>
	I0404 21:30:19.442755   13429 main.go:141] libmachine: (addons-371778)       <readonly/>
	I0404 21:30:19.442770   13429 main.go:141] libmachine: (addons-371778)     </disk>
	I0404 21:30:19.442780   13429 main.go:141] libmachine: (addons-371778)     <disk type='file' device='disk'>
	I0404 21:30:19.442795   13429 main.go:141] libmachine: (addons-371778)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0404 21:30:19.442813   13429 main.go:141] libmachine: (addons-371778)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/addons-371778.rawdisk'/>
	I0404 21:30:19.442825   13429 main.go:141] libmachine: (addons-371778)       <target dev='hda' bus='virtio'/>
	I0404 21:30:19.442837   13429 main.go:141] libmachine: (addons-371778)     </disk>
	I0404 21:30:19.442850   13429 main.go:141] libmachine: (addons-371778)     <interface type='network'>
	I0404 21:30:19.442877   13429 main.go:141] libmachine: (addons-371778)       <source network='mk-addons-371778'/>
	I0404 21:30:19.442896   13429 main.go:141] libmachine: (addons-371778)       <model type='virtio'/>
	I0404 21:30:19.442920   13429 main.go:141] libmachine: (addons-371778)     </interface>
	I0404 21:30:19.442959   13429 main.go:141] libmachine: (addons-371778)     <interface type='network'>
	I0404 21:30:19.442970   13429 main.go:141] libmachine: (addons-371778)       <source network='default'/>
	I0404 21:30:19.442976   13429 main.go:141] libmachine: (addons-371778)       <model type='virtio'/>
	I0404 21:30:19.442984   13429 main.go:141] libmachine: (addons-371778)     </interface>
	I0404 21:30:19.442990   13429 main.go:141] libmachine: (addons-371778)     <serial type='pty'>
	I0404 21:30:19.442998   13429 main.go:141] libmachine: (addons-371778)       <target port='0'/>
	I0404 21:30:19.443003   13429 main.go:141] libmachine: (addons-371778)     </serial>
	I0404 21:30:19.443010   13429 main.go:141] libmachine: (addons-371778)     <console type='pty'>
	I0404 21:30:19.443017   13429 main.go:141] libmachine: (addons-371778)       <target type='serial' port='0'/>
	I0404 21:30:19.443032   13429 main.go:141] libmachine: (addons-371778)     </console>
	I0404 21:30:19.443050   13429 main.go:141] libmachine: (addons-371778)     <rng model='virtio'>
	I0404 21:30:19.443063   13429 main.go:141] libmachine: (addons-371778)       <backend model='random'>/dev/random</backend>
	I0404 21:30:19.443071   13429 main.go:141] libmachine: (addons-371778)     </rng>
	I0404 21:30:19.443083   13429 main.go:141] libmachine: (addons-371778)     
	I0404 21:30:19.443090   13429 main.go:141] libmachine: (addons-371778)     
	I0404 21:30:19.443100   13429 main.go:141] libmachine: (addons-371778)   </devices>
	I0404 21:30:19.443111   13429 main.go:141] libmachine: (addons-371778) </domain>
	I0404 21:30:19.443127   13429 main.go:141] libmachine: (addons-371778) 
	I0404 21:30:19.449601   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:0e:b1:25 in network default
	I0404 21:30:19.450175   13429 main.go:141] libmachine: (addons-371778) Ensuring networks are active...
	I0404 21:30:19.450195   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:19.450926   13429 main.go:141] libmachine: (addons-371778) Ensuring network default is active
	I0404 21:30:19.451249   13429 main.go:141] libmachine: (addons-371778) Ensuring network mk-addons-371778 is active
	I0404 21:30:19.451777   13429 main.go:141] libmachine: (addons-371778) Getting domain xml...
	I0404 21:30:19.452479   13429 main.go:141] libmachine: (addons-371778) Creating domain...
	I0404 21:30:20.868040   13429 main.go:141] libmachine: (addons-371778) Waiting to get IP...
	I0404 21:30:20.869008   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:20.869419   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:20.869470   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:20.869396   13451 retry.go:31] will retry after 302.830046ms: waiting for machine to come up
	I0404 21:30:21.174317   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:21.174923   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:21.174952   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:21.174835   13451 retry.go:31] will retry after 322.020411ms: waiting for machine to come up
	I0404 21:30:21.498403   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:21.498791   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:21.498825   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:21.498760   13451 retry.go:31] will retry after 401.494645ms: waiting for machine to come up
	I0404 21:30:21.902199   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:21.902540   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:21.902566   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:21.902495   13451 retry.go:31] will retry after 597.64886ms: waiting for machine to come up
	I0404 21:30:22.501236   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:22.501671   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:22.501714   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:22.501636   13451 retry.go:31] will retry after 675.456357ms: waiting for machine to come up
	I0404 21:30:23.178386   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:23.178787   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:23.178829   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:23.178786   13451 retry.go:31] will retry after 652.786279ms: waiting for machine to come up
	I0404 21:30:23.833529   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:23.834106   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:23.834133   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:23.834052   13451 retry.go:31] will retry after 1.010603063s: waiting for machine to come up
	I0404 21:30:24.846273   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:24.846734   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:24.846760   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:24.846703   13451 retry.go:31] will retry after 1.128077555s: waiting for machine to come up
	I0404 21:30:25.977051   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:25.977416   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:25.977435   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:25.977393   13451 retry.go:31] will retry after 1.849702772s: waiting for machine to come up
	I0404 21:30:27.829591   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:27.830163   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:27.830186   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:27.830139   13451 retry.go:31] will retry after 1.829375359s: waiting for machine to come up
	I0404 21:30:29.661081   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:29.661551   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:29.661584   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:29.661457   13451 retry.go:31] will retry after 1.825285946s: waiting for machine to come up
	I0404 21:30:31.489105   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:31.489505   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:31.489539   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:31.489480   13451 retry.go:31] will retry after 3.059746263s: waiting for machine to come up
	I0404 21:30:34.550326   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:34.550732   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:34.550765   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:34.550680   13451 retry.go:31] will retry after 3.0711113s: waiting for machine to come up
	I0404 21:30:37.625830   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:37.626278   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find current IP address of domain addons-371778 in network mk-addons-371778
	I0404 21:30:37.626298   13429 main.go:141] libmachine: (addons-371778) DBG | I0404 21:30:37.626239   13451 retry.go:31] will retry after 5.251973258s: waiting for machine to come up
	I0404 21:30:42.883285   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:42.883694   13429 main.go:141] libmachine: (addons-371778) Found IP for machine: 192.168.39.212
	I0404 21:30:42.883712   13429 main.go:141] libmachine: (addons-371778) Reserving static IP address...
	I0404 21:30:42.883721   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has current primary IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:42.884055   13429 main.go:141] libmachine: (addons-371778) DBG | unable to find host DHCP lease matching {name: "addons-371778", mac: "52:54:00:20:f8:8c", ip: "192.168.39.212"} in network mk-addons-371778
	I0404 21:30:42.961562   13429 main.go:141] libmachine: (addons-371778) DBG | Getting to WaitForSSH function...
	I0404 21:30:42.961589   13429 main.go:141] libmachine: (addons-371778) Reserved static IP address: 192.168.39.212
	I0404 21:30:42.961602   13429 main.go:141] libmachine: (addons-371778) Waiting for SSH to be available...
	I0404 21:30:42.964080   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:42.964550   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:minikube Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:42.964585   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:42.964907   13429 main.go:141] libmachine: (addons-371778) DBG | Using SSH client type: external
	I0404 21:30:42.964939   13429 main.go:141] libmachine: (addons-371778) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa (-rw-------)
	I0404 21:30:42.964966   13429 main.go:141] libmachine: (addons-371778) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 21:30:42.964981   13429 main.go:141] libmachine: (addons-371778) DBG | About to run SSH command:
	I0404 21:30:42.965024   13429 main.go:141] libmachine: (addons-371778) DBG | exit 0
	I0404 21:30:43.100504   13429 main.go:141] libmachine: (addons-371778) DBG | SSH cmd err, output: <nil>: 
	I0404 21:30:43.100798   13429 main.go:141] libmachine: (addons-371778) KVM machine creation complete!
	I0404 21:30:43.101130   13429 main.go:141] libmachine: (addons-371778) Calling .GetConfigRaw
	I0404 21:30:43.101652   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:30:43.101872   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:30:43.102101   13429 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0404 21:30:43.102112   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:30:43.103337   13429 main.go:141] libmachine: Detecting operating system of created instance...
	I0404 21:30:43.103350   13429 main.go:141] libmachine: Waiting for SSH to be available...
	I0404 21:30:43.103355   13429 main.go:141] libmachine: Getting to WaitForSSH function...
	I0404 21:30:43.103360   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:30:43.105361   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.105727   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:43.105769   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.105869   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:30:43.106015   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:43.106171   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:43.106260   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:30:43.106399   13429 main.go:141] libmachine: Using SSH client type: native
	I0404 21:30:43.106645   13429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0404 21:30:43.106662   13429 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0404 21:30:43.219554   13429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:30:43.219591   13429 main.go:141] libmachine: Detecting the provisioner...
	I0404 21:30:43.219604   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:30:43.222492   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.222866   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:43.222890   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.223067   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:30:43.223258   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:43.223430   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:43.223584   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:30:43.223749   13429 main.go:141] libmachine: Using SSH client type: native
	I0404 21:30:43.223910   13429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0404 21:30:43.223922   13429 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0404 21:30:43.337508   13429 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0404 21:30:43.337581   13429 main.go:141] libmachine: found compatible host: buildroot
	I0404 21:30:43.337591   13429 main.go:141] libmachine: Provisioning with buildroot...
	I0404 21:30:43.337602   13429 main.go:141] libmachine: (addons-371778) Calling .GetMachineName
	I0404 21:30:43.337918   13429 buildroot.go:166] provisioning hostname "addons-371778"
	I0404 21:30:43.337947   13429 main.go:141] libmachine: (addons-371778) Calling .GetMachineName
	I0404 21:30:43.338185   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:30:43.340947   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.341399   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:43.341431   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.341615   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:30:43.341829   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:43.341981   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:43.342090   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:30:43.342282   13429 main.go:141] libmachine: Using SSH client type: native
	I0404 21:30:43.342474   13429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0404 21:30:43.342495   13429 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-371778 && echo "addons-371778" | sudo tee /etc/hostname
	I0404 21:30:43.475285   13429 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-371778
	
	I0404 21:30:43.475308   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:30:43.478433   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.478798   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:43.478838   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.479083   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:30:43.479317   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:43.479502   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:43.479635   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:30:43.479813   13429 main.go:141] libmachine: Using SSH client type: native
	I0404 21:30:43.480023   13429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0404 21:30:43.480042   13429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-371778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-371778/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-371778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 21:30:43.602674   13429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:30:43.602708   13429 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 21:30:43.602750   13429 buildroot.go:174] setting up certificates
	I0404 21:30:43.602764   13429 provision.go:84] configureAuth start
	I0404 21:30:43.602776   13429 main.go:141] libmachine: (addons-371778) Calling .GetMachineName
	I0404 21:30:43.603050   13429 main.go:141] libmachine: (addons-371778) Calling .GetIP
	I0404 21:30:43.605776   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.606166   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:43.606199   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.606345   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:30:43.608432   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.608819   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:43.608848   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.608989   13429 provision.go:143] copyHostCerts
	I0404 21:30:43.609052   13429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 21:30:43.609187   13429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 21:30:43.609250   13429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 21:30:43.609309   13429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.addons-371778 san=[127.0.0.1 192.168.39.212 addons-371778 localhost minikube]
	I0404 21:30:43.683363   13429 provision.go:177] copyRemoteCerts
	I0404 21:30:43.683419   13429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 21:30:43.683443   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:30:43.686710   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.687074   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:43.687111   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.687299   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:30:43.687485   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:43.687650   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:30:43.687817   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:30:43.777989   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 21:30:43.809221   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0404 21:30:43.836611   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 21:30:43.863958   13429 provision.go:87] duration metric: took 261.182989ms to configureAuth
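configureAuth above generates a machine server certificate whose SANs cover 127.0.0.1, the VM IP and the machine names, signs it with the local CA, and copies the results to /etc/docker on the guest. The following is a minimal sketch of issuing such a SAN-bearing server certificate with Go's crypto/x509; it is illustrative only, not minikube's internal code path, and the subject values are placeholders taken from the log.

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate for the given SANs with an
// existing CA. Sketch only; error handling and PEM encoding are trimmed.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "addons-371778", Organization: []string{"jenkins.addons-371778"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the log: [127.0.0.1 192.168.39.212 addons-371778 localhost minikube]
		DNSNames:    dnsNames,
		IPAddresses: ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}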
	I0404 21:30:43.863989   13429 buildroot.go:189] setting minikube options for container-runtime
	I0404 21:30:43.864178   13429 config.go:182] Loaded profile config "addons-371778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:30:43.864250   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:30:43.867254   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.867618   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:43.867651   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:43.867791   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:30:43.868020   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:43.868260   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:43.868463   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:30:43.868638   13429 main.go:141] libmachine: Using SSH client type: native
	I0404 21:30:43.868793   13429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0404 21:30:43.868827   13429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 21:30:44.183054   13429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 21:30:44.183084   13429 main.go:141] libmachine: Checking connection to Docker...
	I0404 21:30:44.183094   13429 main.go:141] libmachine: (addons-371778) Calling .GetURL
	I0404 21:30:44.184759   13429 main.go:141] libmachine: (addons-371778) DBG | Using libvirt version 6000000
	I0404 21:30:44.187041   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.187536   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:44.187566   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.187708   13429 main.go:141] libmachine: Docker is up and running!
	I0404 21:30:44.187724   13429 main.go:141] libmachine: Reticulating splines...
	I0404 21:30:44.187731   13429 client.go:171] duration metric: took 25.414607511s to LocalClient.Create
	I0404 21:30:44.187753   13429 start.go:167] duration metric: took 25.414664128s to libmachine.API.Create "addons-371778"
	I0404 21:30:44.187787   13429 start.go:293] postStartSetup for "addons-371778" (driver="kvm2")
	I0404 21:30:44.187800   13429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 21:30:44.187816   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:30:44.188042   13429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 21:30:44.188061   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:30:44.190360   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.190839   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:44.190856   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.191048   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:30:44.191248   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:44.191428   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:30:44.191581   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:30:44.279645   13429 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 21:30:44.284571   13429 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 21:30:44.284600   13429 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 21:30:44.284681   13429 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 21:30:44.284722   13429 start.go:296] duration metric: took 96.92704ms for postStartSetup
	I0404 21:30:44.284755   13429 main.go:141] libmachine: (addons-371778) Calling .GetConfigRaw
	I0404 21:30:44.285380   13429 main.go:141] libmachine: (addons-371778) Calling .GetIP
	I0404 21:30:44.288010   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.288387   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:44.288430   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.288625   13429 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/config.json ...
	I0404 21:30:44.288816   13429 start.go:128] duration metric: took 25.535958273s to createHost
	I0404 21:30:44.288840   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:30:44.291629   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.292159   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:44.292186   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.292426   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:30:44.292620   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:44.292768   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:44.292883   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:30:44.293134   13429 main.go:141] libmachine: Using SSH client type: native
	I0404 21:30:44.293294   13429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0404 21:30:44.293306   13429 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0404 21:30:44.409529   13429 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712266244.394317662
	
	I0404 21:30:44.409555   13429 fix.go:216] guest clock: 1712266244.394317662
	I0404 21:30:44.409564   13429 fix.go:229] Guest: 2024-04-04 21:30:44.394317662 +0000 UTC Remote: 2024-04-04 21:30:44.288828094 +0000 UTC m=+25.652566188 (delta=105.489568ms)
	I0404 21:30:44.409584   13429 fix.go:200] guest clock delta is within tolerance: 105.489568ms
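The fix lines above read the guest clock with `date +%s.%N`, compute the delta against the host clock, and accept the machine because the ~105ms skew is within tolerance. A small illustrative sketch of that check follows; the tolerance value is an assumption, not taken from the log.

package provision

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and reports how far the
// guest clock is from the local clock. Float parsing is approximate, which is
// fine for a skew check.
func clockDelta(guestOutput string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

// withinTolerance mirrors the "guest clock delta is within tolerance" check,
// e.g. withinTolerance(delta, time.Second) -- the 1s value is an assumption.
func withinTolerance(delta, tolerance time.Duration) bool {
	return math.Abs(float64(delta)) <= float64(tolerance)
}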
	I0404 21:30:44.409589   13429 start.go:83] releasing machines lock for "addons-371778", held for 25.65685079s
	I0404 21:30:44.409611   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:30:44.409881   13429 main.go:141] libmachine: (addons-371778) Calling .GetIP
	I0404 21:30:44.412950   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.413331   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:44.413365   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.413524   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:30:44.414174   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:30:44.414390   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:30:44.414501   13429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 21:30:44.414552   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:30:44.414667   13429 ssh_runner.go:195] Run: cat /version.json
	I0404 21:30:44.414697   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:30:44.417461   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.417752   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.417833   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:44.417860   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.417984   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:44.418009   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:44.418016   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:30:44.418187   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:30:44.418204   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:44.418364   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:30:44.418365   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:30:44.418526   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:30:44.418535   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:30:44.418655   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:30:44.533685   13429 ssh_runner.go:195] Run: systemctl --version
	I0404 21:30:44.540270   13429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 21:30:44.706417   13429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 21:30:44.712881   13429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 21:30:44.712963   13429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 21:30:44.730813   13429 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
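Because the cluster will bring its own CNI, the step above renames any bridge/podman configs in /etc/cni/net.d to *.mk_disabled so CRI-O stops loading them. A rough Go equivalent of that find-and-rename follows (the real step runs `find ... -exec mv` under sudo over SSH; this is only a local sketch).

package provision

import (
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNIConfigs renames bridge/podman CNI configs so the
// runtime no longer loads them, mirroring the ".mk_disabled" convention above.
func disableConflictingCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}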
	I0404 21:30:44.730845   13429 start.go:494] detecting cgroup driver to use...
	I0404 21:30:44.730919   13429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 21:30:44.747821   13429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 21:30:44.762778   13429 docker.go:217] disabling cri-docker service (if available) ...
	I0404 21:30:44.762831   13429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 21:30:44.777661   13429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 21:30:44.792720   13429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 21:30:44.913250   13429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 21:30:45.076302   13429 docker.go:233] disabling docker service ...
	I0404 21:30:45.076381   13429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 21:30:45.091646   13429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 21:30:45.105100   13429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 21:30:45.253618   13429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 21:30:45.375826   13429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 21:30:45.391013   13429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 21:30:45.410594   13429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 21:30:45.410658   13429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:30:45.422222   13429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 21:30:45.422306   13429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:30:45.434025   13429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:30:45.445205   13429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:30:45.456273   13429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 21:30:45.467847   13429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:30:45.478908   13429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:30:45.497540   13429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:30:45.509223   13429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 21:30:45.519148   13429 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 21:30:45.519212   13429 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 21:30:45.533622   13429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
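The sysctl probe above fails with status 255 simply because br_netfilter is not loaded on a fresh guest; the tool then loads the module and enables IPv4 forwarding, so the warning is expected. A compact sketch of that check-then-fallback sequence (illustrative only; the real commands run under sudo over SSH and require root):

package provision

import (
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: if the
// bridge-nf-call-iptables sysctl cannot be read, load br_netfilter, then turn
// on IPv4 forwarding. Must run as root.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl does not exist until the module is loaded.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}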
	I0404 21:30:45.543797   13429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:30:45.666165   13429 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 21:30:45.808206   13429 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 21:30:45.808309   13429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 21:30:45.813721   13429 start.go:562] Will wait 60s for crictl version
	I0404 21:30:45.813782   13429 ssh_runner.go:195] Run: which crictl
	I0404 21:30:45.817695   13429 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 21:30:45.853800   13429 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 21:30:45.853957   13429 ssh_runner.go:195] Run: crio --version
	I0404 21:30:45.887046   13429 ssh_runner.go:195] Run: crio --version
	I0404 21:30:45.919067   13429 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 21:30:45.920629   13429 main.go:141] libmachine: (addons-371778) Calling .GetIP
	I0404 21:30:45.923274   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:45.923713   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:30:45.923731   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:30:45.923985   13429 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 21:30:45.928273   13429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:30:45.941254   13429 kubeadm.go:877] updating cluster {Name:addons-371778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:addons-371778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 21:30:45.941354   13429 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:30:45.941398   13429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 21:30:45.980506   13429 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 21:30:45.980568   13429 ssh_runner.go:195] Run: which lz4
	I0404 21:30:45.984739   13429 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0404 21:30:45.989052   13429 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 21:30:45.989086   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 21:30:47.472527   13429 crio.go:462] duration metric: took 1.487808837s to copy over tarball
	I0404 21:30:47.472600   13429 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 21:30:49.993236   13429 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.520605663s)
	I0404 21:30:49.993270   13429 crio.go:469] duration metric: took 2.520713578s to extract the tarball
	I0404 21:30:49.993277   13429 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 21:30:50.032259   13429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 21:30:50.077039   13429 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 21:30:50.077060   13429 cache_images.go:84] Images are preloaded, skipping loading
	I0404 21:30:50.077069   13429 kubeadm.go:928] updating node { 192.168.39.212 8443 v1.29.3 crio true true} ...
	I0404 21:30:50.077195   13429 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-371778 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-371778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 21:30:50.077287   13429 ssh_runner.go:195] Run: crio config
	I0404 21:30:50.133418   13429 cni.go:84] Creating CNI manager for ""
	I0404 21:30:50.133445   13429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 21:30:50.133459   13429 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 21:30:50.133479   13429 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-371778 NodeName:addons-371778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 21:30:50.133650   13429 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-371778"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
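The kubeadm config above is rendered from the computed cluster options (advertise address, CRI socket, pod and service CIDRs, cgroup driver) and is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. The text/template sketch that follows shows the general shape of such rendering; the template text and the option type are hypothetical, not minikube's actual bootstrapper template.

package bootstrap

import (
	"os"
	"text/template"
)

// kubeadmOpts carries a few of the values seen in the log; illustrative only.
type kubeadmOpts struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

var initTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`))

// renderKubeadmConfig writes the rendered fragment to stdout; a real flow
// would write it to a file and copy it to the node.
func renderKubeadmConfig(o kubeadmOpts) error {
	return initTmpl.Execute(os.Stdout, o)
}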
	I0404 21:30:50.133720   13429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 21:30:50.145281   13429 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 21:30:50.145355   13429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 21:30:50.156223   13429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0404 21:30:50.175314   13429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 21:30:50.194541   13429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0404 21:30:50.213404   13429 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0404 21:30:50.217686   13429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:30:50.232259   13429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:30:50.375859   13429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:30:50.395237   13429 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778 for IP: 192.168.39.212
	I0404 21:30:50.395266   13429 certs.go:194] generating shared ca certs ...
	I0404 21:30:50.395287   13429 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:50.395463   13429 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 21:30:50.561875   13429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt ...
	I0404 21:30:50.561907   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt: {Name:mk47caa56324e78e6ea515afea1a88db59a433eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:50.562105   13429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key ...
	I0404 21:30:50.562120   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key: {Name:mk825c07701965d37187ac001608b77944e0f4b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:50.562216   13429 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 21:30:50.734555   13429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt ...
	I0404 21:30:50.734587   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt: {Name:mka3f04905193525453be0bc95aa0ccc3b168c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:50.734758   13429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key ...
	I0404 21:30:50.734773   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key: {Name:mk5ec97a798bb71d435958a2c788e8fd40a9b768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:50.734865   13429 certs.go:256] generating profile certs ...
	I0404 21:30:50.734933   13429 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.key
	I0404 21:30:50.734953   13429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt with IP's: []
	I0404 21:30:50.804762   13429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt ...
	I0404 21:30:50.804799   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: {Name:mk3a53c812b04a7baa38638ac75b739f1f28dc3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:50.805006   13429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.key ...
	I0404 21:30:50.805022   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.key: {Name:mk8c78e001989fc8d0915ac5b104b614acf8a3b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:50.805146   13429 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.key.d753da70
	I0404 21:30:50.805180   13429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.crt.d753da70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212]
	I0404 21:30:50.874793   13429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.crt.d753da70 ...
	I0404 21:30:50.874836   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.crt.d753da70: {Name:mk516346c5a654264ae8394e54935abb01dfea3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:50.875031   13429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.key.d753da70 ...
	I0404 21:30:50.875050   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.key.d753da70: {Name:mk084303e95fdd58e0dbacd6d4e87d0ccad4f889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:50.875158   13429 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.crt.d753da70 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.crt
	I0404 21:30:50.875286   13429 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.key.d753da70 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.key
	I0404 21:30:50.875380   13429 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/proxy-client.key
	I0404 21:30:50.875410   13429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/proxy-client.crt with IP's: []
	I0404 21:30:51.004906   13429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/proxy-client.crt ...
	I0404 21:30:51.004944   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/proxy-client.crt: {Name:mkbaddb4c690d46822fc2c307c627e752d2046a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:51.005153   13429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/proxy-client.key ...
	I0404 21:30:51.005171   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/proxy-client.key: {Name:mk3a57c4cdea98e5065173d4db55f69c7c617052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:30:51.005385   13429 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 21:30:51.005446   13429 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 21:30:51.005516   13429 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 21:30:51.005562   13429 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 21:30:51.006216   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 21:30:51.037653   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 21:30:51.065766   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 21:30:51.101682   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 21:30:51.131706   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0404 21:30:51.163967   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 21:30:51.192544   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 21:30:51.219772   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 21:30:51.246757   13429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 21:30:51.273978   13429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 21:30:51.292560   13429 ssh_runner.go:195] Run: openssl version
	I0404 21:30:51.299538   13429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 21:30:51.311431   13429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:30:51.316685   13429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:30:51.316738   13429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:30:51.322698   13429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 21:30:51.334763   13429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 21:30:51.339322   13429 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0404 21:30:51.339382   13429 kubeadm.go:391] StartCluster: {Name:addons-371778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 C
lusterName:addons-371778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:30:51.339456   13429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 21:30:51.339502   13429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 21:30:51.379381   13429 cri.go:89] found id: ""
	I0404 21:30:51.379463   13429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0404 21:30:51.390690   13429 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 21:30:51.401632   13429 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 21:30:51.412174   13429 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 21:30:51.412206   13429 kubeadm.go:156] found existing configuration files:
	
	I0404 21:30:51.412258   13429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 21:30:51.422367   13429 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 21:30:51.422430   13429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 21:30:51.433220   13429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 21:30:51.444609   13429 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 21:30:51.444677   13429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 21:30:51.455433   13429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 21:30:51.466288   13429 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 21:30:51.466352   13429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 21:30:51.476879   13429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 21:30:51.486806   13429 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 21:30:51.486866   13429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
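
The block above is minikube's stale-config sweep: before re-running kubeadm init it greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and deletes any file that is missing or does not reference it. A minimal Go sketch of that check-and-remove pattern follows; the function name and hard-coded paths are illustrative assumptions, not minikube's actual code.

    // Hypothetical sketch of the stale-config sweep logged above. Any
    // kubeconfig that does not mention the control-plane endpoint (or does
    // not exist) is treated as stale and removed before kubeadm init.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func cleanupStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent or the file is
            // missing; either way the old config must not be reused.
            if err := exec.Command("sudo", "grep", "-q", endpoint, f).Run(); err != nil {
                fmt.Printf("removing stale config %s\n", f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }
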
	I0404 21:30:51.497628   13429 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 21:30:51.550170   13429 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0404 21:30:51.550245   13429 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 21:30:51.678590   13429 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 21:30:51.678769   13429 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 21:30:51.678893   13429 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 21:30:51.892764   13429 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 21:30:52.106052   13429 out.go:204]   - Generating certificates and keys ...
	I0404 21:30:52.106188   13429 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 21:30:52.106269   13429 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 21:30:52.136227   13429 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0404 21:30:52.470703   13429 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0404 21:30:52.674745   13429 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0404 21:30:52.862445   13429 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0404 21:30:53.085720   13429 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0404 21:30:53.085964   13429 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-371778 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0404 21:30:53.197508   13429 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0404 21:30:53.197710   13429 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-371778 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0404 21:30:53.452840   13429 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0404 21:30:53.940677   13429 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0404 21:30:54.179681   13429 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0404 21:30:54.179879   13429 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 21:30:54.287413   13429 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 21:30:54.382187   13429 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0404 21:30:54.647449   13429 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 21:30:54.931927   13429 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 21:30:55.110975   13429 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 21:30:55.111570   13429 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 21:30:55.115605   13429 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 21:30:55.117542   13429 out.go:204]   - Booting up control plane ...
	I0404 21:30:55.117650   13429 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 21:30:55.117733   13429 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 21:30:55.117817   13429 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 21:30:55.135355   13429 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 21:30:55.135724   13429 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 21:30:55.135792   13429 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 21:30:55.272568   13429 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 21:31:01.776194   13429 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503866 seconds
	I0404 21:31:01.798075   13429 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0404 21:31:01.821674   13429 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0404 21:31:02.351861   13429 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0404 21:31:02.352114   13429 kubeadm.go:309] [mark-control-plane] Marking the node addons-371778 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0404 21:31:02.877226   13429 kubeadm.go:309] [bootstrap-token] Using token: utmcce.vie0y1a7qr5wp0sl
	I0404 21:31:02.879012   13429 out.go:204]   - Configuring RBAC rules ...
	I0404 21:31:02.879139   13429 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0404 21:31:02.890144   13429 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0404 21:31:02.913753   13429 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0404 21:31:02.928008   13429 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0404 21:31:02.937710   13429 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0404 21:31:02.941677   13429 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0404 21:31:02.957713   13429 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0404 21:31:03.218375   13429 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0404 21:31:03.297237   13429 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0404 21:31:03.297261   13429 kubeadm.go:309] 
	I0404 21:31:03.297323   13429 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0404 21:31:03.297364   13429 kubeadm.go:309] 
	I0404 21:31:03.297471   13429 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0404 21:31:03.297484   13429 kubeadm.go:309] 
	I0404 21:31:03.297519   13429 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0404 21:31:03.297603   13429 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0404 21:31:03.297678   13429 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0404 21:31:03.297696   13429 kubeadm.go:309] 
	I0404 21:31:03.297832   13429 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0404 21:31:03.297854   13429 kubeadm.go:309] 
	I0404 21:31:03.297916   13429 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0404 21:31:03.297934   13429 kubeadm.go:309] 
	I0404 21:31:03.298002   13429 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0404 21:31:03.298112   13429 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0404 21:31:03.298174   13429 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0404 21:31:03.298180   13429 kubeadm.go:309] 
	I0404 21:31:03.298294   13429 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0404 21:31:03.298417   13429 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0404 21:31:03.298429   13429 kubeadm.go:309] 
	I0404 21:31:03.298540   13429 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token utmcce.vie0y1a7qr5wp0sl \
	I0404 21:31:03.298693   13429 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c \
	I0404 21:31:03.298727   13429 kubeadm.go:309] 	--control-plane 
	I0404 21:31:03.298737   13429 kubeadm.go:309] 
	I0404 21:31:03.298870   13429 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0404 21:31:03.298896   13429 kubeadm.go:309] 
	I0404 21:31:03.298999   13429 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token utmcce.vie0y1a7qr5wp0sl \
	I0404 21:31:03.299087   13429 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c 
	I0404 21:31:03.299951   13429 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 21:31:03.299987   13429 cni.go:84] Creating CNI manager for ""
	I0404 21:31:03.299999   13429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 21:31:03.302188   13429 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 21:31:03.303924   13429 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 21:31:03.339531   13429 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
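
The CNI step writes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents, so the sketch below only illustrates what a typical bridge-plus-portmap conflist looks like; every field value is an assumption, not the literal file minikube generates.

    // Illustrative bridge CNI conflist of the kind written to
    // /etc/cni/net.d/1-k8s.conflist; the actual 496-byte file is not shown
    // in the log, so these values are assumptions.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conflist := map[string]interface{}{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]interface{}{
                {
                    "type":        "bridge",
                    "bridge":      "bridge",
                    "isGateway":   true,
                    "ipMasq":      true,
                    "hairpinMode": true,
                    "ipam": map[string]interface{}{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {
                    "type":         "portmap",
                    "capabilities": map[string]bool{"portMappings": true},
                },
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out))
    }
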
	I0404 21:31:03.388066   13429 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 21:31:03.388186   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:03.388226   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-371778 minikube.k8s.io/updated_at=2024_04_04T21_31_03_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=addons-371778 minikube.k8s.io/primary=true
	I0404 21:31:03.476940   13429 ops.go:34] apiserver oom_adj: -16
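
The ops.go line reports the result of the earlier `cat /proc/$(pgrep kube-apiserver)/oom_adj` check: the apiserver runs with oom_adj -16, making it less likely to be OOM-killed. A small, illustrative Go sketch of the same check:

    // Sketch of the oom_adj check above: look up the kube-apiserver pid and
    // read /proc/<pid>/oom_adj, the same thing the logged shell one-liner does.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("kube-apiserver not running:", err)
            return
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }
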
	I0404 21:31:03.588368   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:04.089159   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:04.589254   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:05.089075   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:05.589175   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:06.089427   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:06.589423   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:07.089152   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:07.589060   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:08.088780   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:08.589202   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:09.088912   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:09.589115   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:10.088870   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:10.588556   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:11.088660   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:11.588560   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:12.088842   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:12.589072   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:13.088473   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:13.589334   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:14.089042   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:14.588500   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:15.089367   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:15.589420   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:16.088539   13429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:31:16.210999   13429 kubeadm.go:1107] duration metric: took 12.822889627s to wait for elevateKubeSystemPrivileges
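
The repeated `kubectl get sa default` lines above are the elevateKubeSystemPrivileges wait: minikube polls roughly every half second until the default service account exists, then records the elapsed time (about 12.8s here). A minimal sketch of that poll-with-timeout pattern, with illustrative paths, user, and timeout:

    // Minimal sketch of the wait loop above: re-run "kubectl get sa default"
    // every 500ms until it succeeds or the deadline passes. Paths and the
    // timeout are illustrative assumptions.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
        start := time.Now()
        for time.Since(start) < timeout {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if cmd.Run() == nil {
                fmt.Printf("default service account ready after %s\n", time.Since(start))
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for default service account", timeout)
    }

    func main() {
        _ = waitForDefaultServiceAccount(
            "/var/lib/minikube/binaries/v1.29.3/kubectl",
            "/var/lib/minikube/kubeconfig",
            2*time.Minute,
        )
    }
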
	W0404 21:31:16.211046   13429 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0404 21:31:16.211058   13429 kubeadm.go:393] duration metric: took 24.871678161s to StartCluster
	I0404 21:31:16.211082   13429 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:31:16.211242   13429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:31:16.212411   13429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:31:16.212653   13429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0404 21:31:16.212687   13429 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:31:16.214850   13429 out.go:177] * Verifying Kubernetes components...
	I0404 21:31:16.212808   13429 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
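
The toEnable map above lists every addon with its desired state; only the true entries (ingress, ingress-dns, metrics-server, registry, storage-provisioner, csi-hostpath-driver, yakd, and so on) are installed in the steps that follow. A trivial sketch of filtering such a map, with a hand-copied subset of the logged values:

    // Trivial sketch: collect the addons switched on in a toEnable-style map.
    // The entries below are a subset of the map logged above.
    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        toEnable := map[string]bool{
            "ingress": true, "ingress-dns": true, "metrics-server": true,
            "registry": true, "storage-provisioner": true, "yakd": true,
            "csi-hostpath-driver": true, "dashboard": false, "ambassador": false,
        }
        var enabled []string
        for name, on := range toEnable {
            if on {
                enabled = append(enabled, name)
            }
        }
        sort.Strings(enabled)
        fmt.Println("addons to enable:", enabled)
    }
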
	I0404 21:31:16.212997   13429 config.go:182] Loaded profile config "addons-371778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:31:16.216387   13429 addons.go:69] Setting gcp-auth=true in profile "addons-371778"
	I0404 21:31:16.216406   13429 addons.go:69] Setting yakd=true in profile "addons-371778"
	I0404 21:31:16.216419   13429 mustload.go:65] Loading cluster: addons-371778
	I0404 21:31:16.216426   13429 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-371778"
	I0404 21:31:16.216443   13429 addons.go:69] Setting ingress=true in profile "addons-371778"
	I0404 21:31:16.216447   13429 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-371778"
	I0404 21:31:16.216445   13429 addons.go:69] Setting cloud-spanner=true in profile "addons-371778"
	I0404 21:31:16.216476   13429 addons.go:234] Setting addon ingress=true in "addons-371778"
	I0404 21:31:16.216489   13429 addons.go:234] Setting addon cloud-spanner=true in "addons-371778"
	I0404 21:31:16.216495   13429 addons.go:69] Setting ingress-dns=true in profile "addons-371778"
	I0404 21:31:16.216499   13429 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-371778"
	I0404 21:31:16.216519   13429 addons.go:234] Setting addon ingress-dns=true in "addons-371778"
	I0404 21:31:16.216522   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.216433   13429 addons.go:234] Setting addon yakd=true in "addons-371778"
	I0404 21:31:16.216534   13429 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-371778"
	I0404 21:31:16.216534   13429 addons.go:69] Setting storage-provisioner=true in profile "addons-371778"
	I0404 21:31:16.216552   13429 addons.go:69] Setting default-storageclass=true in profile "addons-371778"
	I0404 21:31:16.216564   13429 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-371778"
	I0404 21:31:16.216568   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.216582   13429 addons.go:234] Setting addon storage-provisioner=true in "addons-371778"
	I0404 21:31:16.216603   13429 config.go:182] Loaded profile config "addons-371778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:31:16.216577   13429 addons.go:69] Setting inspektor-gadget=true in profile "addons-371778"
	I0404 21:31:16.216653   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.216659   13429 addons.go:234] Setting addon inspektor-gadget=true in "addons-371778"
	I0404 21:31:16.216638   13429 addons.go:69] Setting volumesnapshots=true in profile "addons-371778"
	I0404 21:31:16.216706   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.216718   13429 addons.go:234] Setting addon volumesnapshots=true in "addons-371778"
	I0404 21:31:16.216785   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.216522   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.217054   13429 addons.go:69] Setting registry=true in profile "addons-371778"
	I0404 21:31:16.217063   13429 addons.go:69] Setting metrics-server=true in profile "addons-371778"
	I0404 21:31:16.217081   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.217091   13429 addons.go:234] Setting addon metrics-server=true in "addons-371778"
	I0404 21:31:16.216490   13429 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-371778"
	I0404 21:31:16.217107   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.217122   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.217135   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.217158   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.217187   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.217191   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.217040   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.216522   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.217091   13429 addons.go:234] Setting addon registry=true in "addons-371778"
	I0404 21:31:16.216547   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.217223   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.217250   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.217270   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.217110   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.217358   13429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-371778"
	I0404 21:31:16.217052   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.216432   13429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:31:16.217415   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.217434   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.216437   13429 addons.go:69] Setting helm-tiller=true in profile "addons-371778"
	I0404 21:31:16.217114   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.217482   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.217482   13429 addons.go:234] Setting addon helm-tiller=true in "addons-371778"
	I0404 21:31:16.217115   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.217592   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.217618   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.217676   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.217702   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.217742   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.218070   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.218406   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.218433   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.239740   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42251
	I0404 21:31:16.240249   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.240830   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.240858   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.241232   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.241873   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.241926   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.242256   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0404 21:31:16.242658   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.243184   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.243206   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.243534   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.244075   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.244165   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.246602   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41033
	I0404 21:31:16.247271   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.247989   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.248007   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.248586   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.249225   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.249264   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.249505   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I0404 21:31:16.254129   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44711
	I0404 21:31:16.254386   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.254388   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.254421   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.254729   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.254752   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.254730   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.254845   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.255015   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.255106   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.255746   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.255762   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.256109   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.256488   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.256849   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.256883   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.256935   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.256950   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.257329   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.257730   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.262762   13429 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-371778"
	I0404 21:31:16.262808   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.263191   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.263217   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.266164   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0404 21:31:16.266874   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.267960   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.267978   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.268419   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.269013   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.269057   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.274232   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0404 21:31:16.274263   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33919
	I0404 21:31:16.274669   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.274773   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.275656   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.275676   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.275825   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.275836   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.275988   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.276166   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.276474   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.277060   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.277096   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.277997   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.280836   13429 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0404 21:31:16.279606   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I0404 21:31:16.280517   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0404 21:31:16.281750   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0404 21:31:16.282063   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40433
	I0404 21:31:16.282712   13429 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0404 21:31:16.282727   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0404 21:31:16.282746   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.283540   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.283887   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.284413   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.284429   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.284498   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.284710   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.284726   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.284883   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.285170   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.285184   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.285237   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.285552   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.286094   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.286382   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.287017   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.287155   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.287094   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.287116   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.287397   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.287530   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.287541   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.287616   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.287976   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.288006   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.288355   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.288359   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.288420   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.288506   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.288573   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.289401   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
	I0404 21:31:16.289728   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.289784   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.289861   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.290260   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.290314   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.290334   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.292927   13429 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0404 21:31:16.290842   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.291091   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.293987   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0404 21:31:16.294462   13429 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0404 21:31:16.294473   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0404 21:31:16.294492   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.294717   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.295189   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.295203   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.295537   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.295624   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.295662   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.296342   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.297051   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.298184   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.298672   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.298699   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.316151   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.321056   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
	I0404 21:31:16.321150   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33375
	I0404 21:31:16.321300   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44563
	I0404 21:31:16.321434   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0404 21:31:16.321071   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0404 21:31:16.321104   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43073
	I0404 21:31:16.321683   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.322108   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.322127   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.322215   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.322219   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.323024   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.323111   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.323133   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.323186   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.323242   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.323253   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.323329   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I0404 21:31:16.323495   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.323531   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.323544   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.324449   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.324557   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.324577   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.324648   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.324716   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.324730   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.324739   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.324779   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.324805   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.324827   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.324955   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.325136   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.325150   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.325213   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.325224   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.325571   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.325639   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.325678   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.325681   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.325716   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.326074   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.326825   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.326991   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.327024   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.327536   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.328032   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.330242   13429 out.go:177]   - Using image docker.io/registry:2.8.3
	I0404 21:31:16.329284   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.329795   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.331504   13429 addons.go:234] Setting addon default-storageclass=true in "addons-371778"
	I0404 21:31:16.331545   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.336288   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.337485   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:16.337884   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.337939   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.339808   13429 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0404 21:31:16.339862   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36857
	I0404 21:31:16.341211   13429 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0404 21:31:16.341225   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0404 21:31:16.341243   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.342515   13429 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0404 21:31:16.344006   13429 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0404 21:31:16.344036   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0404 21:31:16.344059   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.342550   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.345466   13429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0404 21:31:16.346890   13429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0404 21:31:16.345547   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.340705   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39741
	I0404 21:31:16.344109   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37425
	I0404 21:31:16.344628   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.345064   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0404 21:31:16.346135   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.346404   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I0404 21:31:16.347015   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.347548   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.348070   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.348697   13429 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 21:31:16.348720   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.348755   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.348997   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.349156   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.350193   13429 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0404 21:31:16.350299   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.350457   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.350619   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.351146   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0404 21:31:16.353132   13429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0404 21:31:16.351675   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.351918   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.351951   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.352041   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.352051   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.352161   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.352387   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.352671   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.352836   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.354755   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.354813   13429 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0404 21:31:16.354823   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0404 21:31:16.354840   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.354861   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.354895   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.354922   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.354926   13429 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0404 21:31:16.354940   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0404 21:31:16.354944   13429 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 21:31:16.354952   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 21:31:16.354957   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.354962   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.354965   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.355663   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.355666   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.355688   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.355665   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.355716   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.355846   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.355859   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.355911   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.356683   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.356985   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.357163   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.357240   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.358797   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.360878   13429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0404 21:31:16.362367   13429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0404 21:31:16.361144   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.361173   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.360412   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.363377   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.363625   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.363651   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.363391   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.364843   13429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0404 21:31:16.363399   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.363418   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.363532   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I0404 21:31:16.363553   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0404 21:31:16.364012   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.364043   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.364070   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.364212   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.364231   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.364636   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.366138   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.366624   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.366672   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.367565   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.367575   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.367607   13429 out.go:177]   - Using image docker.io/busybox:stable
	I0404 21:31:16.367620   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.367735   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.367739   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.367819   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.369028   13429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0404 21:31:16.370390   13429 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0404 21:31:16.369044   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.370748   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.370753   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.370771   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.370917   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.371594   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.371976   13429 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0404 21:31:16.372239   13429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0404 21:31:16.374938   13429 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0404 21:31:16.373583   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.373638   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.373648   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0404 21:31:16.373766   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.373792   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.373812   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.376336   13429 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0404 21:31:16.377897   13429 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0404 21:31:16.378987   13429 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0404 21:31:16.376394   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.376755   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.377023   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.377914   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0404 21:31:16.380197   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.381760   13429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0404 21:31:16.380612   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.381182   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:16.383228   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:16.383516   13429 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0404 21:31:16.383531   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0404 21:31:16.383547   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.384184   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.384461   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.384487   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.384657   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.384837   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.385011   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.385166   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.385435   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.386200   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.386224   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.386480   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.386555   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.386788   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.388098   13429 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0404 21:31:16.386957   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.388048   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.388684   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.389456   13429 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 21:31:16.389468   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 21:31:16.389480   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.389538   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.389560   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.389727   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.391063   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45609
	I0404 21:31:16.391180   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.391412   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.391496   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.391928   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.392292   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.392306   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.392650   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.392815   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.394313   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.396243   13429 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0404 21:31:16.394715   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38401
	I0404 21:31:16.394894   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.395415   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.397434   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.397463   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.397541   13429 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0404 21:31:16.397552   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0404 21:31:16.397566   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.397566   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.397718   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.397833   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.397900   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.398868   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.398888   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.399344   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.399569   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.400459   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.400862   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.400893   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.401052   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.401083   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.401220   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.403042   13429 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0404 21:31:16.401372   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.403213   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.404525   13429 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0404 21:31:16.404557   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0404 21:31:16.404575   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.406155   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37307
	I0404 21:31:16.406552   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:16.407263   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:16.407285   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:16.407693   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:16.407737   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.407912   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:16.408200   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.408224   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.408384   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.408552   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.408700   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.408839   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.409661   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:16.409869   13429 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 21:31:16.409887   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 21:31:16.409902   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:16.412215   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.412630   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:16.412656   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:16.412710   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:16.412866   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:16.412968   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:16.413043   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:16.720727   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0404 21:31:16.734351   13429 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0404 21:31:16.734368   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0404 21:31:16.760433   13429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:31:16.760470   13429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0404 21:31:16.775101   13429 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0404 21:31:16.775129   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0404 21:31:16.820623   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0404 21:31:16.847586   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 21:31:16.866298   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0404 21:31:16.867138   13429 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 21:31:16.867159   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0404 21:31:16.911577   13429 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0404 21:31:16.911607   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0404 21:31:16.914999   13429 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0404 21:31:16.915017   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0404 21:31:16.917388   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 21:31:16.919157   13429 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0404 21:31:16.919174   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0404 21:31:16.928727   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0404 21:31:16.951568   13429 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0404 21:31:16.951588   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0404 21:31:16.953702   13429 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0404 21:31:16.953725   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0404 21:31:16.956043   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0404 21:31:16.982715   13429 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0404 21:31:16.982744   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0404 21:31:17.096216   13429 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0404 21:31:17.096246   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0404 21:31:17.121304   13429 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0404 21:31:17.121335   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0404 21:31:17.123539   13429 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 21:31:17.123563   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 21:31:17.148718   13429 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0404 21:31:17.148750   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0404 21:31:17.170184   13429 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0404 21:31:17.170209   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0404 21:31:17.202330   13429 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0404 21:31:17.202356   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0404 21:31:17.211855   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0404 21:31:17.332337   13429 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0404 21:31:17.332369   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0404 21:31:17.333826   13429 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 21:31:17.333853   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 21:31:17.349541   13429 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0404 21:31:17.349571   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0404 21:31:17.390064   13429 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0404 21:31:17.390091   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0404 21:31:17.417490   13429 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0404 21:31:17.417521   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0404 21:31:17.440968   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0404 21:31:17.494785   13429 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0404 21:31:17.494814   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0404 21:31:17.525741   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 21:31:17.534706   13429 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0404 21:31:17.534727   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0404 21:31:17.555540   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0404 21:31:17.603553   13429 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0404 21:31:17.603583   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0404 21:31:17.686043   13429 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0404 21:31:17.686073   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0404 21:31:17.751356   13429 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0404 21:31:17.751388   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0404 21:31:17.888436   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0404 21:31:17.928446   13429 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0404 21:31:17.928472   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0404 21:31:18.088257   13429 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0404 21:31:18.088283   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0404 21:31:18.304346   13429 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0404 21:31:18.304375   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0404 21:31:18.363421   13429 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0404 21:31:18.363448   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0404 21:31:18.453054   13429 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0404 21:31:18.453078   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0404 21:31:18.610761   13429 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0404 21:31:18.610792   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0404 21:31:18.669004   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0404 21:31:18.869407   13429 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0404 21:31:18.869429   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0404 21:31:19.128065   13429 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0404 21:31:19.128092   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0404 21:31:19.367529   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0404 21:31:20.933595   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.212829625s)
	I0404 21:31:20.933632   13429 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.173138116s)
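For context, the sed pipeline completed above edits the coredns ConfigMap in place: it inserts a hosts block immediately before the "forward . /etc/resolv.conf" directive and a "log" directive before "errors". The rewritten Corefile fragment therefore looks roughly like this (an illustrative reconstruction assembled from the sed expressions in the command, not copied from the cluster):

        errors
        log
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

This is what lets pods resolve host.minikube.internal to the host-side address 192.168.39.1, as confirmed by the "host record injected into CoreDNS's ConfigMap" line below.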
	I0404 21:31:20.933646   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:20.933647   13429 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0404 21:31:20.933658   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:20.933697   13429 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.173224883s)
	I0404 21:31:20.933960   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:20.934004   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:20.934013   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:20.934030   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:20.934043   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:20.934413   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:20.934429   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:20.934451   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:20.956938   13429 node_ready.go:35] waiting up to 6m0s for node "addons-371778" to be "Ready" ...
	I0404 21:31:21.023547   13429 node_ready.go:49] node "addons-371778" has status "Ready":"True"
	I0404 21:31:21.023574   13429 node_ready.go:38] duration metric: took 66.606555ms for node "addons-371778" to be "Ready" ...
	I0404 21:31:21.023586   13429 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 21:31:21.118445   13429 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-l2rrs" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:21.256043   13429 pod_ready.go:92] pod "coredns-76f75df574-l2rrs" in "kube-system" namespace has status "Ready":"True"
	I0404 21:31:21.256082   13429 pod_ready.go:81] duration metric: took 137.605921ms for pod "coredns-76f75df574-l2rrs" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:21.256097   13429 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zrsz7" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:21.486921   13429 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-371778" context rescaled to 1 replicas
	I0404 21:31:21.800790   13429 pod_ready.go:92] pod "coredns-76f75df574-zrsz7" in "kube-system" namespace has status "Ready":"True"
	I0404 21:31:21.800828   13429 pod_ready.go:81] duration metric: took 544.721524ms for pod "coredns-76f75df574-zrsz7" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:21.800843   13429 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-371778" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:21.969047   13429 pod_ready.go:92] pod "etcd-addons-371778" in "kube-system" namespace has status "Ready":"True"
	I0404 21:31:21.969078   13429 pod_ready.go:81] duration metric: took 168.225805ms for pod "etcd-addons-371778" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:21.969093   13429 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-371778" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:22.147446   13429 pod_ready.go:92] pod "kube-apiserver-addons-371778" in "kube-system" namespace has status "Ready":"True"
	I0404 21:31:22.147481   13429 pod_ready.go:81] duration metric: took 178.379013ms for pod "kube-apiserver-addons-371778" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:22.147496   13429 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-371778" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:22.213702   13429 pod_ready.go:92] pod "kube-controller-manager-addons-371778" in "kube-system" namespace has status "Ready":"True"
	I0404 21:31:22.213730   13429 pod_ready.go:81] duration metric: took 66.225675ms for pod "kube-controller-manager-addons-371778" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:22.213743   13429 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9x5lc" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:22.282556   13429 pod_ready.go:92] pod "kube-proxy-9x5lc" in "kube-system" namespace has status "Ready":"True"
	I0404 21:31:22.282601   13429 pod_ready.go:81] duration metric: took 68.848003ms for pod "kube-proxy-9x5lc" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:22.282617   13429 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-371778" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:22.560188   13429 pod_ready.go:92] pod "kube-scheduler-addons-371778" in "kube-system" namespace has status "Ready":"True"
	I0404 21:31:22.560218   13429 pod_ready.go:81] duration metric: took 277.592463ms for pod "kube-scheduler-addons-371778" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:22.560233   13429 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:23.168481   13429 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0404 21:31:23.168529   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:23.171802   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:23.172328   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:23.172354   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:23.172519   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:23.172720   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:23.172892   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:23.173056   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:23.885858   13429 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0404 21:31:24.067364   13429 addons.go:234] Setting addon gcp-auth=true in "addons-371778"
	I0404 21:31:24.067424   13429 host.go:66] Checking if "addons-371778" exists ...
	I0404 21:31:24.067838   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:24.067880   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:24.083416   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0404 21:31:24.083870   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:24.084317   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:24.084331   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:24.084610   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:24.085207   13429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:31:24.085256   13429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:31:24.100476   13429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37559
	I0404 21:31:24.100945   13429 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:31:24.101390   13429 main.go:141] libmachine: Using API Version  1
	I0404 21:31:24.101414   13429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:31:24.101827   13429 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:31:24.102080   13429 main.go:141] libmachine: (addons-371778) Calling .GetState
	I0404 21:31:24.103804   13429 main.go:141] libmachine: (addons-371778) Calling .DriverName
	I0404 21:31:24.104069   13429 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0404 21:31:24.104099   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHHostname
	I0404 21:31:24.106881   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:24.107311   13429 main.go:141] libmachine: (addons-371778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:f8:8c", ip: ""} in network mk-addons-371778: {Iface:virbr1 ExpiryTime:2024-04-04 22:30:34 +0000 UTC Type:0 Mac:52:54:00:20:f8:8c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-371778 Clientid:01:52:54:00:20:f8:8c}
	I0404 21:31:24.107349   13429 main.go:141] libmachine: (addons-371778) DBG | domain addons-371778 has defined IP address 192.168.39.212 and MAC address 52:54:00:20:f8:8c in network mk-addons-371778
	I0404 21:31:24.107472   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHPort
	I0404 21:31:24.107650   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHKeyPath
	I0404 21:31:24.107868   13429 main.go:141] libmachine: (addons-371778) Calling .GetSSHUsername
	I0404 21:31:24.108075   13429 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/addons-371778/id_rsa Username:docker}
	I0404 21:31:24.639131   13429 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace has status "Ready":"False"
	I0404 21:31:25.391104   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.570437119s)
	I0404 21:31:25.391162   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391159   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.543538294s)
	I0404 21:31:25.391174   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391204   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391222   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391203   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.524874784s)
	I0404 21:31:25.391249   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.473834314s)
	I0404 21:31:25.391289   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391293   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.46254417s)
	I0404 21:31:25.391302   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391315   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391316   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.435249584s)
	I0404 21:31:25.391323   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391348   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391357   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391372   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391392   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.179510843s)
	I0404 21:31:25.391358   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391409   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391418   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391427   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.391455   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.391457   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.391476   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.950477226s)
	I0404 21:31:25.391491   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391500   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391518   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.391531   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391540   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391702   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.865929482s)
	I0404 21:31:25.391715   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.391728   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391738   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391769   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.391778   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.391787   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391795   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391817   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.836248782s)
	I0404 21:31:25.391832   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391841   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391868   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.391880   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.391888   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.391896   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391897   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.391922   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.391929   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.391933   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.503464656s)
	I0404 21:31:25.391953   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	W0404 21:31:25.391963   13429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0404 21:31:25.391972   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.391979   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.391984   13429 retry.go:31] will retry after 351.136351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
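This "resource mapping not found ... ensure CRDs are installed first" failure is the usual ordering race when a custom resource (here the VolumeSnapshotClass "csi-hostpath-snapclass") is applied in the same kubectl batch as the CRDs that define its kind; the retry about 350ms later, re-applying the same manifests with --force, succeeds once the API server has registered the new types. A minimal sketch of the two-phase sequencing that avoids the race is below; the file paths and CRD name are taken from the log above, while the explicit wait step is an assumption about how one could order the applies, not something minikube runs here:

    # apply the snapshot CRDs on their own first
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml

    # block until the new kinds are registered with the API server
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io

    # only then create objects of those kinds
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml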
	I0404 21:31:25.391986   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.392000   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.392006   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.392018   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.392026   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.392033   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.392040   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.392077   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.392094   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.392096   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.723056643s)
	I0404 21:31:25.392100   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.392108   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.392111   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.392115   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.392135   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.391940   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.392170   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.392210   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.392237   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.392245   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.393186   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.393209   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.393249   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.393257   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.393448   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.393457   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.394073   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.394111   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.394120   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.394871   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.394898   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.394928   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.394936   13429 addons.go:470] Verifying addon registry=true in "addons-371778"
	I0404 21:31:25.397582   13429 out.go:177] * Verifying registry addon...
	I0404 21:31:25.395175   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.395202   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.395223   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.395236   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.395254   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.395268   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.396277   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.396302   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.396325   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.397077   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.397133   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.399435   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.399453   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.399463   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.399475   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.399499   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.399510   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.399524   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.399527   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.399534   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.399536   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.399542   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.399459   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.399597   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.399605   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.399778   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.399830   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.399830   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.399838   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.399849   13429 addons.go:470] Verifying addon metrics-server=true in "addons-371778"
	I0404 21:31:25.399855   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.399864   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.399873   13429 addons.go:470] Verifying addon ingress=true in "addons-371778"
	I0404 21:31:25.399934   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.401684   13429 out.go:177] * Verifying ingress addon...
	I0404 21:31:25.400339   13429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0404 21:31:25.400379   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.401936   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.402718   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.402760   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.403921   13429 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-371778 service yakd-dashboard -n yakd-dashboard
	
	I0404 21:31:25.403516   13429 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0404 21:31:25.420532   13429 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0404 21:31:25.420577   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:25.420769   13429 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0404 21:31:25.420796   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:25.438791   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.438816   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.439090   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.439109   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:25.440444   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:25.440463   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:25.440714   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:25.440750   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:25.440763   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	W0404 21:31:25.440855   13429 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
	I0404 21:31:25.743373   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0404 21:31:25.908623   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:25.909822   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:26.413265   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:26.415737   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:26.754804   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.387218902s)
	I0404 21:31:26.754875   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:26.754877   13429 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.650781039s)
	I0404 21:31:26.757180   13429 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0404 21:31:26.754888   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:26.760228   13429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0404 21:31:26.758781   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:26.758808   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:26.762128   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:26.762159   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:26.762163   13429 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0404 21:31:26.762170   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:26.762174   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0404 21:31:26.762421   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:26.762437   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:26.762449   13429 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-371778"
	I0404 21:31:26.763993   13429 out.go:177] * Verifying csi-hostpath-driver addon...
	I0404 21:31:26.765721   13429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0404 21:31:26.777514   13429 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0404 21:31:26.777536   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:26.846092   13429 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0404 21:31:26.846116   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0404 21:31:26.894193   13429 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0404 21:31:26.894215   13429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0404 21:31:26.935566   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:26.938991   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:27.014923   13429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0404 21:31:27.072782   13429 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace has status "Ready":"False"
	I0404 21:31:27.272963   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:27.407385   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:27.410608   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:27.787643   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:27.923961   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:27.923972   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:28.272079   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:28.275391   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.531952658s)
	I0404 21:31:28.275453   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:28.275464   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:28.275798   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:28.275819   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:28.275827   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:28.275835   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:28.275805   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:28.276204   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:28.276232   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:28.276250   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:28.416030   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:28.416213   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:28.812803   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:28.814875   13429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.7999197s)
	I0404 21:31:28.814918   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:28.814926   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:28.815195   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:28.815215   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:28.815224   13429 main.go:141] libmachine: Making call to close driver server
	I0404 21:31:28.815233   13429 main.go:141] libmachine: (addons-371778) Calling .Close
	I0404 21:31:28.815232   13429 main.go:141] libmachine: (addons-371778) DBG | Closing plugin on server side
	I0404 21:31:28.815421   13429 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:31:28.815434   13429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:31:28.817779   13429 addons.go:470] Verifying addon gcp-auth=true in "addons-371778"
	I0404 21:31:28.819457   13429 out.go:177] * Verifying gcp-auth addon...
	I0404 21:31:28.821852   13429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0404 21:31:28.874395   13429 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0404 21:31:28.874421   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:28.939368   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:28.945332   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:29.280809   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:29.326370   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:29.416359   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:29.420056   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:29.566900   13429 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace has status "Ready":"False"
	I0404 21:31:29.773094   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:29.825906   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:29.909577   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:29.914660   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:30.274464   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:30.326176   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:30.407549   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:30.410761   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:30.773045   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:30.826493   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:30.908384   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:30.910696   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:31.272278   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:31.325943   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:31.408093   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:31.410031   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:31.771567   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:31.826501   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:31.907896   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:31.910086   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:32.069378   13429 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace has status "Ready":"False"
	I0404 21:31:32.272399   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:32.326281   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:32.407752   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:32.410846   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:32.771630   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:32.826470   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:32.908231   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:32.910405   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:33.271914   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:33.326305   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:33.408211   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:33.411738   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:33.772759   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:33.825980   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:33.909916   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:33.911654   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:34.072200   13429 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace has status "Ready":"False"
	I0404 21:31:34.273147   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:34.326135   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:34.407252   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:34.410370   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:34.771719   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:34.831256   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:34.907994   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:34.910330   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:35.271766   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:35.325817   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:35.409023   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:35.411475   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:35.772073   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:35.826061   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:35.908544   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:35.910837   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:36.271592   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:36.326910   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:36.408623   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:36.410828   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:36.810353   13429 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace has status "Ready":"False"
	I0404 21:31:36.812815   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:36.825435   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:36.910357   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:36.911184   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:37.271794   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:37.327353   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:37.407839   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:37.410851   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:37.785370   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:37.829309   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:37.907642   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:37.911012   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:38.272780   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:38.326364   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:38.407956   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:38.411150   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:38.773422   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:38.826229   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:38.908008   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:38.911716   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:39.072130   13429 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace has status "Ready":"False"
	I0404 21:31:39.272034   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:39.325838   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:39.409921   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:39.414052   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:39.771411   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:39.826501   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:39.909766   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:39.911865   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:40.272384   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:40.326726   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:40.409285   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:40.412641   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:40.773345   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:40.825819   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:40.908494   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:40.910504   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:41.272605   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:41.325666   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:41.410211   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:41.410518   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:41.566977   13429 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace has status "Ready":"False"
	I0404 21:31:41.771694   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:41.826359   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:41.907804   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:41.910986   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:42.272100   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:42.326717   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:42.409093   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:42.411625   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:42.773977   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:42.826928   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:42.909730   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:42.910909   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:43.271904   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:43.325766   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:43.409611   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:43.411119   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:43.567227   13429 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace has status "Ready":"False"
	I0404 21:31:43.772357   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:43.826067   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:43.908006   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:43.911647   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:44.272707   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:44.325647   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:44.410534   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:44.416175   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:44.771563   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:44.829034   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:44.907825   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:44.910690   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:45.276202   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:45.326100   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:45.409242   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:45.410643   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:45.569089   13429 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace has status "Ready":"True"
	I0404 21:31:45.569113   13429 pod_ready.go:81] duration metric: took 23.008873131s for pod "nvidia-device-plugin-daemonset-cnk9f" in "kube-system" namespace to be "Ready" ...
	I0404 21:31:45.569122   13429 pod_ready.go:38] duration metric: took 24.545525716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 21:31:45.569136   13429 api_server.go:52] waiting for apiserver process to appear ...
	I0404 21:31:45.569190   13429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:31:45.591829   13429 api_server.go:72] duration metric: took 29.379106713s to wait for apiserver process to appear ...
	I0404 21:31:45.591851   13429 api_server.go:88] waiting for apiserver healthz status ...
	I0404 21:31:45.591869   13429 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0404 21:31:45.598696   13429 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I0404 21:31:45.599875   13429 api_server.go:141] control plane version: v1.29.3
	I0404 21:31:45.599898   13429 api_server.go:131] duration metric: took 8.041353ms to wait for apiserver health ...
	I0404 21:31:45.599906   13429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 21:31:45.615274   13429 system_pods.go:59] 18 kube-system pods found
	I0404 21:31:45.615319   13429 system_pods.go:61] "coredns-76f75df574-l2rrs" [b37ee6fc-0ff9-4864-8eb0-797c13c2ebad] Running
	I0404 21:31:45.615339   13429 system_pods.go:61] "csi-hostpath-attacher-0" [6e75a14e-3ae0-4b64-935c-ce3cc78430e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0404 21:31:45.615355   13429 system_pods.go:61] "csi-hostpath-resizer-0" [6fd110f7-9c1a-4785-bac0-dfdceada9599] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0404 21:31:45.615410   13429 system_pods.go:61] "csi-hostpathplugin-sjsk8" [472dc225-bbff-4058-ad27-a9e8360750b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0404 21:31:45.615432   13429 system_pods.go:61] "etcd-addons-371778" [0aa16925-92dc-4518-912e-01c1f8c7076c] Running
	I0404 21:31:45.615457   13429 system_pods.go:61] "kube-apiserver-addons-371778" [40ee5932-840e-424e-bef3-f4739f6ab655] Running
	I0404 21:31:45.615475   13429 system_pods.go:61] "kube-controller-manager-addons-371778" [0b508a69-646d-4721-93f6-89a2b920abb2] Running
	I0404 21:31:45.615493   13429 system_pods.go:61] "kube-ingress-dns-minikube" [c89cb0ef-3601-4100-9a13-ef24f7df1c79] Running
	I0404 21:31:45.615511   13429 system_pods.go:61] "kube-proxy-9x5lc" [b741e9fb-25a1-4df5-8add-86a611026f90] Running
	I0404 21:31:45.615528   13429 system_pods.go:61] "kube-scheduler-addons-371778" [f9021f54-7c22-4294-b1a7-46d807fba13b] Running
	I0404 21:31:45.615555   13429 system_pods.go:61] "metrics-server-75d6c48ddd-4gcdm" [99896135-c9ec-418c-af55-cb7c8e9bee69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 21:31:45.615573   13429 system_pods.go:61] "nvidia-device-plugin-daemonset-cnk9f" [ddbb8390-14f9-4749-bf9d-28c23eca618a] Running
	I0404 21:31:45.615590   13429 system_pods.go:61] "registry-72422" [75fbb208-e940-4f84-ae37-d85e195edeaf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0404 21:31:45.615614   13429 system_pods.go:61] "registry-proxy-nw2xt" [aae8dd6b-7489-4a11-91b8-b09ae3009693] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0404 21:31:45.615629   13429 system_pods.go:61] "snapshot-controller-58dbcc7b99-26qmc" [73650498-60b4-4f8e-ab00-61c51bfc170c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0404 21:31:45.615644   13429 system_pods.go:61] "snapshot-controller-58dbcc7b99-n769h" [79d018f5-2166-4bca-aeaa-41b781d57d5f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0404 21:31:45.615656   13429 system_pods.go:61] "storage-provisioner" [d345bffd-4ee3-446e-a3ea-aa009385ee0f] Running
	I0404 21:31:45.615671   13429 system_pods.go:61] "tiller-deploy-7b677967b9-k2rdd" [012fb8a6-0e59-4491-93b3-98178f8b5f87] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0404 21:31:45.615685   13429 system_pods.go:74] duration metric: took 15.770721ms to wait for pod list to return data ...
	I0404 21:31:45.615702   13429 default_sa.go:34] waiting for default service account to be created ...
	I0404 21:31:45.619259   13429 default_sa.go:45] found service account: "default"
	I0404 21:31:45.619285   13429 default_sa.go:55] duration metric: took 3.571842ms for default service account to be created ...
	I0404 21:31:45.619295   13429 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 21:31:45.628286   13429 system_pods.go:86] 18 kube-system pods found
	I0404 21:31:45.628318   13429 system_pods.go:89] "coredns-76f75df574-l2rrs" [b37ee6fc-0ff9-4864-8eb0-797c13c2ebad] Running
	I0404 21:31:45.628327   13429 system_pods.go:89] "csi-hostpath-attacher-0" [6e75a14e-3ae0-4b64-935c-ce3cc78430e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0404 21:31:45.628335   13429 system_pods.go:89] "csi-hostpath-resizer-0" [6fd110f7-9c1a-4785-bac0-dfdceada9599] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0404 21:31:45.628343   13429 system_pods.go:89] "csi-hostpathplugin-sjsk8" [472dc225-bbff-4058-ad27-a9e8360750b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0404 21:31:45.628348   13429 system_pods.go:89] "etcd-addons-371778" [0aa16925-92dc-4518-912e-01c1f8c7076c] Running
	I0404 21:31:45.628353   13429 system_pods.go:89] "kube-apiserver-addons-371778" [40ee5932-840e-424e-bef3-f4739f6ab655] Running
	I0404 21:31:45.628357   13429 system_pods.go:89] "kube-controller-manager-addons-371778" [0b508a69-646d-4721-93f6-89a2b920abb2] Running
	I0404 21:31:45.628362   13429 system_pods.go:89] "kube-ingress-dns-minikube" [c89cb0ef-3601-4100-9a13-ef24f7df1c79] Running
	I0404 21:31:45.628366   13429 system_pods.go:89] "kube-proxy-9x5lc" [b741e9fb-25a1-4df5-8add-86a611026f90] Running
	I0404 21:31:45.628377   13429 system_pods.go:89] "kube-scheduler-addons-371778" [f9021f54-7c22-4294-b1a7-46d807fba13b] Running
	I0404 21:31:45.628386   13429 system_pods.go:89] "metrics-server-75d6c48ddd-4gcdm" [99896135-c9ec-418c-af55-cb7c8e9bee69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 21:31:45.628392   13429 system_pods.go:89] "nvidia-device-plugin-daemonset-cnk9f" [ddbb8390-14f9-4749-bf9d-28c23eca618a] Running
	I0404 21:31:45.628402   13429 system_pods.go:89] "registry-72422" [75fbb208-e940-4f84-ae37-d85e195edeaf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0404 21:31:45.628414   13429 system_pods.go:89] "registry-proxy-nw2xt" [aae8dd6b-7489-4a11-91b8-b09ae3009693] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0404 21:31:45.628423   13429 system_pods.go:89] "snapshot-controller-58dbcc7b99-26qmc" [73650498-60b4-4f8e-ab00-61c51bfc170c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0404 21:31:45.628434   13429 system_pods.go:89] "snapshot-controller-58dbcc7b99-n769h" [79d018f5-2166-4bca-aeaa-41b781d57d5f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0404 21:31:45.628440   13429 system_pods.go:89] "storage-provisioner" [d345bffd-4ee3-446e-a3ea-aa009385ee0f] Running
	I0404 21:31:45.628446   13429 system_pods.go:89] "tiller-deploy-7b677967b9-k2rdd" [012fb8a6-0e59-4491-93b3-98178f8b5f87] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0404 21:31:45.628454   13429 system_pods.go:126] duration metric: took 9.153003ms to wait for k8s-apps to be running ...
	I0404 21:31:45.628464   13429 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 21:31:45.628518   13429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:31:45.645767   13429 system_svc.go:56] duration metric: took 17.292699ms WaitForService to wait for kubelet
	I0404 21:31:45.645804   13429 kubeadm.go:576] duration metric: took 29.433083909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 21:31:45.645828   13429 node_conditions.go:102] verifying NodePressure condition ...
	I0404 21:31:45.649240   13429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 21:31:45.649275   13429 node_conditions.go:123] node cpu capacity is 2
	I0404 21:31:45.649287   13429 node_conditions.go:105] duration metric: took 3.454253ms to run NodePressure ...
	I0404 21:31:45.649313   13429 start.go:240] waiting for startup goroutines ...
	I0404 21:31:45.778234   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:45.826572   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:45.916773   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:45.916801   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:46.273785   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:46.326066   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:46.408602   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:46.410394   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:46.772511   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:46.826328   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:46.913043   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:46.913534   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:47.274469   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:47.326605   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:47.409166   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:47.411320   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:47.771576   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:47.826721   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:47.908014   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:47.910934   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:48.272868   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:48.326440   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:48.408257   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:48.411470   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:48.772385   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:48.826245   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:48.908784   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:48.910365   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:49.271798   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:49.326657   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:49.409002   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:49.412408   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:49.772584   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:49.825977   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:49.909351   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:49.912168   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:50.277107   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:50.328964   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:50.409915   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:50.410611   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:50.774820   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:50.825880   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:50.907793   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:50.910557   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:51.274322   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:51.326212   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:51.409572   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:51.411448   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:51.778564   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:51.827127   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:51.907777   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:51.910642   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:52.278970   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:52.326939   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:52.415871   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:52.416936   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:52.776718   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:52.826348   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:52.907979   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:52.910346   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:53.588369   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:53.588498   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:53.590830   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:53.592509   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:53.771717   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:53.826116   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:53.909827   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:53.910559   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:54.271589   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:54.326075   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:54.414911   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:54.417121   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:54.771807   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:54.826785   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:54.908398   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:54.911457   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:55.271612   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:55.326559   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:55.408294   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:55.410575   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:55.772638   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:55.826517   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:55.908056   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:55.909960   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:56.272255   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:56.327038   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:56.409268   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:56.409924   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:56.772708   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:56.826576   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:56.911051   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:56.912202   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:57.273444   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:57.336232   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:57.408720   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:57.410537   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:57.771544   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:57.826179   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:57.907246   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:57.909727   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:58.272474   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:58.329385   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:58.408102   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:58.409617   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:58.772033   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:58.826588   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:58.909582   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:58.911287   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:59.273954   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:59.326799   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:59.408565   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:31:59.410698   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:59.772480   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:31:59.826679   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:31:59.912097   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:31:59.912383   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:00.272799   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:00.326489   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:00.413635   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:00.415611   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:00.771473   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:00.826224   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:00.909068   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:00.912333   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:01.275131   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:01.325268   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:01.481456   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:01.483446   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:01.772993   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:01.826766   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:01.910638   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:01.911269   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:02.272037   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:02.326006   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:02.411748   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:02.412114   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:02.775476   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:02.827281   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:02.910719   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:02.911266   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:03.283407   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:03.329652   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:03.411827   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:03.411972   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:03.771187   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:03.825297   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:03.907379   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:03.910254   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:04.273391   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:04.326910   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:04.416661   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:04.418289   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:04.772676   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:04.826036   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:04.908166   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:04.913975   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:05.276770   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:05.326079   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:05.408577   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:05.412227   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:05.971788   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:05.972831   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:05.973133   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:05.977289   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:06.272903   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:06.326661   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:06.407999   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:06.410971   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:06.775591   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:06.826242   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:06.907915   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:06.910561   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:07.276377   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:07.326156   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:07.407937   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:07.410953   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:07.772576   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:07.826600   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:07.908197   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:07.909628   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:08.272657   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:08.326580   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:08.411052   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:08.411865   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:08.772656   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:08.826298   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:08.908335   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:08.911105   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:09.271956   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:09.325885   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:09.409717   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:09.413479   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:09.771381   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:09.826106   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:09.912344   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:09.913700   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:10.272413   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:10.325893   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:10.408207   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:10.411085   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:10.772169   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:10.825698   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:10.908504   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:10.912804   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:11.273307   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:11.326529   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:11.979074   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:11.979293   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:11.979410   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:11.979843   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:11.985453   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:11.985635   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:12.271351   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:12.326602   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:12.408008   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:12.409366   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:12.771492   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:12.828936   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:12.908977   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:12.910439   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:13.271607   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:13.325651   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:13.409735   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:13.411066   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:13.772639   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:13.826148   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:13.908094   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:13.910630   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:14.271858   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:14.325646   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:14.425512   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:14.453137   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:14.773157   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:14.825708   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:14.907906   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:14.914614   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:15.271870   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:15.326406   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:15.409626   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:15.410468   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:15.771135   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:15.826136   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:15.910759   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:15.911293   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:16.273198   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:16.325677   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:16.407996   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:16.410170   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:16.770997   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:17.049169   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:17.050307   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:17.051016   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:17.272377   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:17.326020   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:17.409772   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:17.410639   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:17.772078   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:17.826724   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:17.911539   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:17.911900   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:18.272843   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:18.326069   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:18.409804   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:18.410076   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:18.771958   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:18.826204   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:18.907562   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:18.910337   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:19.271536   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:19.326981   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:19.408019   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:19.410788   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:19.771751   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:19.825526   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:19.910914   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:19.916022   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:20.272367   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:20.326438   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:20.409209   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:20.410452   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:20.773065   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:20.826284   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:20.907876   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:20.910846   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:21.272018   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:21.326633   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:21.407586   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:21.411062   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:21.772567   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:21.828430   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:21.907597   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:21.910764   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:22.272363   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:22.326151   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:22.407523   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0404 21:32:22.410120   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:22.771969   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:22.826384   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:22.910791   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:22.911373   13429 kapi.go:107] duration metric: took 57.511031517s to wait for kubernetes.io/minikube-addons=registry ...
	I0404 21:32:23.271709   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:23.325554   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:23.410408   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:23.801146   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:23.826348   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:23.911424   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:24.272414   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:24.326406   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:24.410823   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:24.772040   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:24.825814   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:24.909547   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:25.274470   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:25.325727   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:25.411241   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:25.772605   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:25.826747   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:25.911060   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:26.271285   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:26.326059   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:26.411760   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:26.773748   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:26.825505   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:26.911717   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:27.272661   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:27.328402   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:27.410900   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:27.777064   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:27.831135   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:27.910863   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:28.272425   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:28.326927   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:28.410395   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:28.772948   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:28.827418   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:28.910854   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:29.273167   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:29.326042   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:29.410655   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:29.771317   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:29.826396   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:29.911108   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:30.273572   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:30.327224   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:30.411156   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:30.772868   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:30.828632   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:30.919135   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:31.272936   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:31.327659   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:31.410130   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:31.773710   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:31.828867   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:31.919020   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:32.279747   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:32.325703   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:32.411062   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:32.772074   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:32.827368   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:32.910677   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:33.274517   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:33.335999   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:33.418382   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:33.785612   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:33.826599   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:33.911016   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:34.683289   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:34.683955   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:34.684540   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:34.775101   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:34.832020   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:34.931435   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:35.271483   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:35.326358   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:35.410477   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:35.772730   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:35.834468   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:35.910324   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:36.273082   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:36.326509   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:36.410540   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:36.772583   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:36.826419   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:36.911495   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:37.272135   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:37.327678   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:37.410791   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:37.772588   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:37.826481   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:37.911847   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:38.271823   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:38.325776   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:38.410752   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:38.771980   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:38.826001   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:38.911020   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:39.272022   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:39.325032   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0404 21:32:39.412896   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:39.773301   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:39.828422   13429 kapi.go:107] duration metric: took 1m11.006567114s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0404 21:32:39.830342   13429 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-371778 cluster.
	I0404 21:32:39.831868   13429 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0404 21:32:39.833328   13429 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
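A minimal pod manifest illustrating the `gcp-auth-skip-secret` label mentioned in the message above (an editorial sketch, not part of the captured log; the pod name, container image, and the "true" label value are assumptions, with only the label key taken from the output):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-auth-example        # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"   # assumed value; the log only names the label key
    spec:
      containers:
      - name: app
        image: nginx                   # placeholder image
        ports:
        - containerPort: 80

Pods carrying this label would be skipped by the gcp-auth webhook, so no credential secret is mounted into them; all other pods in the addons-371778 cluster receive the mount as described above.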
	I0404 21:32:39.910934   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:40.272040   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:40.409597   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:40.772477   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:40.911412   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:41.273902   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:41.411594   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:41.774574   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:41.910307   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:42.274209   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:42.410210   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:42.771418   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:42.912851   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:43.271750   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:43.410417   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:43.772006   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:43.910255   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:44.271640   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:44.410315   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:44.771346   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:44.909637   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:45.271735   13429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0404 21:32:45.410531   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:45.771636   13429 kapi.go:107] duration metric: took 1m19.005912195s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0404 21:32:45.909699   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:46.409932   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:46.910416   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:47.411280   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:47.910247   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:48.410655   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:48.910423   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:49.411226   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:49.910943   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:50.410126   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:50.910253   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:51.410434   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:51.910899   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:52.410792   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:52.910137   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:53.410174   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:53.909970   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:54.413562   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:54.911991   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:55.411064   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:55.910229   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:56.410704   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:56.910028   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:57.410251   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:57.910472   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:58.415973   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:58.910405   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:59.411895   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:32:59.910088   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:00.410340   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:00.910311   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:01.412838   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:01.912338   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:02.411778   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:02.910321   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:03.411506   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:03.910826   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:04.410489   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:04.910921   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:05.409732   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:05.910542   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:06.411516   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:06.913133   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:07.411449   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:07.911617   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:08.410082   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:08.910353   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:09.411093   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:09.910767   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:10.409824   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:10.910341   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:11.411454   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:11.911033   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:12.410929   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:12.911211   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:13.411168   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:13.913118   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:14.410146   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:14.910739   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:15.410646   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:15.910641   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:16.412072   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:16.910350   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:17.410910   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:17.912586   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:18.411692   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:18.910945   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:19.410850   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:19.910759   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:20.410070   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:20.910095   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:21.411349   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:21.911075   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:22.411177   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:22.914940   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:23.410353   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:23.910127   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:24.411143   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:24.910142   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:25.413439   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:25.911389   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:26.412066   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:26.910050   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:27.413219   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:27.910127   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:28.410491   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:28.910879   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:29.410378   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:29.910772   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:30.410465   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:30.910943   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:31.410532   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:31.913554   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:32.411158   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:32.910628   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:33.411140   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:33.910251   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:34.410742   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:34.909869   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:35.410348   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:35.911722   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:36.410890   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:36.910020   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:37.410330   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:37.911292   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:38.410883   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:38.910390   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:39.410298   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:39.910969   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:40.409974   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:40.910476   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:41.410648   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:41.910466   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:42.410352   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:42.911866   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:43.409647   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:43.912019   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:44.411471   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:44.911388   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:45.410776   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:45.911192   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:46.410969   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:46.911200   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:47.409824   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:47.910440   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:48.410580   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:48.910855   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:49.411209   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:49.910752   13429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0404 21:33:50.410037   13429 kapi.go:107] duration metric: took 2m25.006520332s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0404 21:33:50.412016   13429 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, storage-provisioner, nvidia-device-plugin, helm-tiller, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0404 21:33:50.413740   13429 addons.go:505] duration metric: took 2m34.200924841s for enable addons: enabled=[ingress-dns cloud-spanner storage-provisioner nvidia-device-plugin helm-tiller metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0404 21:33:50.413828   13429 start.go:245] waiting for cluster config update ...
	I0404 21:33:50.413854   13429 start.go:254] writing updated cluster config ...
	I0404 21:33:50.414142   13429 ssh_runner.go:195] Run: rm -f paused
	I0404 21:33:50.467801   13429 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 21:33:50.469819   13429 out.go:177] * Done! kubectl is now configured to use "addons-371778" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.747208448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712266611747173514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571855,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60f3dfc8-9f95-4184-8abe-4bd96ed31430 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.747963566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5aa7291-b8f3-47b5-a73e-75433af3c8be name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.748034829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5aa7291-b8f3-47b5-a73e-75433af3c8be name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.748354349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2b57a668d7946400c1d7019e5b90dc09596df670ce729c03dacfa8e1f5e541e,PodSandboxId:8e9b8ceb3e47bdc6302f10886b46badb1f2623ad272b96d38d7c2af756a79d82,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712266605680155933,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-w4qkl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 295f8e6d-f849-41e7-b4c9-2602055db742,},Annotations:map[string]string{io.kubernetes.container.hash: fd6daeaa,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59def19f256e0fdd30a094a5f0690c4f8b59f273f210b8efd1e88c943107d16,PodSandboxId:aa62df047f2962bbc614e4677f8819eefa9fd50bd9d5cbd165454172b44bb59a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712266474219823556,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-phlpc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 94f2c53a-d004-4235-ab7f-d56fab607309,},Annota
tions:map[string]string{io.kubernetes.container.hash: b1bbfe3e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cbe3469e0fab7a26d5b847f7e06a4113bee5e44e7050e27b27c480b4317ab,PodSandboxId:ae76d4b571236f9b75b893a5069ffbdfc587acdd9063730e5a6ea305e7768243,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712266464688990472,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 695f5ffb-1ddc-4d3c-876b-41c0e72062f7,},Annotations:map[string]string{io.kubernetes.container.hash: 534dbdcb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501a83666c0c541ed4c8850c28cc0f08f04b90dcf36917470ef7b4a5a7541c1f,PodSandboxId:a7f5be98a3475025e428b6c46996f8fed529f780c9bf211cfb9809276e42ceba,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712266358522522145,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-rq7sf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 70bf348e-a2ef-49b2-ba9f-fe2022dad2a1,},Annotations:map[string]string{io.kubernetes.container.hash: 4965a982,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4570e7767e17bf74f1ffecff12d5adfc2242e5899ace5fcf8ffb869406fde2,PodSandboxId:33f079051b1fe7f30af1f83babf1958586349658b31dd19ffc227cf4cc0f7e63,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712266350865064598,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kndb4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 789815ed-7dff-4847-9ebe-6543bed84702,},Annotations:map[string]string{io.kubernetes.container.hash: e2b8fa52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b5b53edf30611c4e64dea21de5b8eb0a43aba8933971e5cccb2191f3b8c5a5b,PodSandboxId:b3d40f4fc87f3a7e810951042a11bf72f32248d8c54d5a986d5da6efda269111,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1712266349848512388,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6n22j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 748c86df-01e3-4a31-9b89-1ad812781d34,},Annotations:map[string]string{io.kubernetes.container.hash: f00317f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d4363b3d89df38a3241f4686216f68d35168ec298131d211786399ef2d22d2,PodSandboxId:f3d3d05cabd8b389c90395e2510667e93ca6ee3a6b1768d0d75ca6ae923be688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1712266332089948766,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-88stp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 9f86c2c9-62b3-41e7-9373-de69268e4332,},Annotations:map[string]string{io.kubernetes.container.hash: 61f344d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8ae4b9f83ff9ac2399a6afb68e9ce85aaa996f977d64a49916d3e165ab3175,PodSandboxId:5ca97d91392e3642bd4088dd0a4d8357bc1579330ed9eb1ac3421973b57c9061,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712266283239684349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d345bffd-4ee3-446e-a3ea-aa009385ee0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6447da42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b045773f9be62925bc02e7f0daa375abe5d50be87bd429c462cf699f9916f6,PodSandboxId:f0f7609eef8ce39bd1e410c1cc8825d63a62d92d1d04b88395585feb8d3fb665,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712266279549189712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l2rrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b37ee6fc-0ff9-4864-8eb0-797c13c2ebad,},Annotations:map[string]string{io.kubernetes.container.hash: 5191034,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ac21f7dde84b870400b416c3136bf7da83c1fb32b85348d9f857b1b0b5950f,PodSandboxId:cd209b32f03f90d796c663306e919ffda36119272def71671110ab8f183aa1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,}
,Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712266276028665270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x5lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b741e9fb-25a1-4df5-8add-86a611026f90,},Annotations:map[string]string{io.kubernetes.container.hash: 1c371bcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfce3317a31e882defaca9c4cfc54444108d2c4bd2af93764963ba50cfd96897,PodSandboxId:f2584d4e59f2d8219d8a0855847afc68b361b964fddfaed62e37c8d79019cfac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390
d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712266256835954660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd4efcbc00bd110e5eb5e90c1fa46df,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee2be11fcfff16b0b41d2b2b7941dd9ce0212fca778a5dd4cf38f8ecbc4bb838,PodSandboxId:bfbd766e9aa26adbe02cfc7c73492fc0a5ab4e7679130460c89acd089cee82f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97
387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712266256794952480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d97b3416948b1f463ea4a0688c7b44,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33dc59c983460025ea40b10d01d83bc4a5ee127691a3b67f33dd6ca1cf50e11a,PodSandboxId:8ef374cdcaf4b008da4470c5047cbe57ccc78fb2507fcd145f8886e2a09cf009,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712266256809212084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 842ad2383a929092f124e597b7364770,},Annotations:map[string]string{io.kubernetes.container.hash: f696e9aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4eeb26158b8c7d387bcfcb089a6bf87240ebf319c906aa4879ad38dc696ffc7,PodSandboxId:1dfdf02e42fdd73fa6b38647b2702392f98ee7690d630b5f7af8da62cce08d62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efa
ab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712266256800075347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 828fd7aff758dd125e98081343dd7f86,},Annotations:map[string]string{io.kubernetes.container.hash: 5426a594,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5aa7291-b8f3-47b5-a73e-75433af3c8be name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.789932767Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6442470a-bbaa-480d-9098-3da8f88f2db8 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.790493672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6442470a-bbaa-480d-9098-3da8f88f2db8 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.791942845Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba96be3b-3388-4c09-93d6-c14470dba29b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.793714617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712266611793685955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571855,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba96be3b-3388-4c09-93d6-c14470dba29b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.794341722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18e4190b-7abc-4439-87cd-b9b024b797e8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.794471495Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18e4190b-7abc-4439-87cd-b9b024b797e8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.795146452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2b57a668d7946400c1d7019e5b90dc09596df670ce729c03dacfa8e1f5e541e,PodSandboxId:8e9b8ceb3e47bdc6302f10886b46badb1f2623ad272b96d38d7c2af756a79d82,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712266605680155933,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-w4qkl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 295f8e6d-f849-41e7-b4c9-2602055db742,},Annotations:map[string]string{io.kubernetes.container.hash: fd6daeaa,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59def19f256e0fdd30a094a5f0690c4f8b59f273f210b8efd1e88c943107d16,PodSandboxId:aa62df047f2962bbc614e4677f8819eefa9fd50bd9d5cbd165454172b44bb59a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712266474219823556,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-phlpc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 94f2c53a-d004-4235-ab7f-d56fab607309,},Annota
tions:map[string]string{io.kubernetes.container.hash: b1bbfe3e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cbe3469e0fab7a26d5b847f7e06a4113bee5e44e7050e27b27c480b4317ab,PodSandboxId:ae76d4b571236f9b75b893a5069ffbdfc587acdd9063730e5a6ea305e7768243,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712266464688990472,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 695f5ffb-1ddc-4d3c-876b-41c0e72062f7,},Annotations:map[string]string{io.kubernetes.container.hash: 534dbdcb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501a83666c0c541ed4c8850c28cc0f08f04b90dcf36917470ef7b4a5a7541c1f,PodSandboxId:a7f5be98a3475025e428b6c46996f8fed529f780c9bf211cfb9809276e42ceba,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712266358522522145,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-rq7sf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 70bf348e-a2ef-49b2-ba9f-fe2022dad2a1,},Annotations:map[string]string{io.kubernetes.container.hash: 4965a982,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4570e7767e17bf74f1ffecff12d5adfc2242e5899ace5fcf8ffb869406fde2,PodSandboxId:33f079051b1fe7f30af1f83babf1958586349658b31dd19ffc227cf4cc0f7e63,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712266350865064598,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kndb4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 789815ed-7dff-4847-9ebe-6543bed84702,},Annotations:map[string]string{io.kubernetes.container.hash: e2b8fa52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b5b53edf30611c4e64dea21de5b8eb0a43aba8933971e5cccb2191f3b8c5a5b,PodSandboxId:b3d40f4fc87f3a7e810951042a11bf72f32248d8c54d5a986d5da6efda269111,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1712266349848512388,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6n22j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 748c86df-01e3-4a31-9b89-1ad812781d34,},Annotations:map[string]string{io.kubernetes.container.hash: f00317f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d4363b3d89df38a3241f4686216f68d35168ec298131d211786399ef2d22d2,PodSandboxId:f3d3d05cabd8b389c90395e2510667e93ca6ee3a6b1768d0d75ca6ae923be688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1712266332089948766,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-88stp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 9f86c2c9-62b3-41e7-9373-de69268e4332,},Annotations:map[string]string{io.kubernetes.container.hash: 61f344d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8ae4b9f83ff9ac2399a6afb68e9ce85aaa996f977d64a49916d3e165ab3175,PodSandboxId:5ca97d91392e3642bd4088dd0a4d8357bc1579330ed9eb1ac3421973b57c9061,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712266283239684349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d345bffd-4ee3-446e-a3ea-aa009385ee0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6447da42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b045773f9be62925bc02e7f0daa375abe5d50be87bd429c462cf699f9916f6,PodSandboxId:f0f7609eef8ce39bd1e410c1cc8825d63a62d92d1d04b88395585feb8d3fb665,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712266279549189712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l2rrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b37ee6fc-0ff9-4864-8eb0-797c13c2ebad,},Annotations:map[string]string{io.kubernetes.container.hash: 5191034,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ac21f7dde84b870400b416c3136bf7da83c1fb32b85348d9f857b1b0b5950f,PodSandboxId:cd209b32f03f90d796c663306e919ffda36119272def71671110ab8f183aa1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,}
,Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712266276028665270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x5lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b741e9fb-25a1-4df5-8add-86a611026f90,},Annotations:map[string]string{io.kubernetes.container.hash: 1c371bcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfce3317a31e882defaca9c4cfc54444108d2c4bd2af93764963ba50cfd96897,PodSandboxId:f2584d4e59f2d8219d8a0855847afc68b361b964fddfaed62e37c8d79019cfac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390
d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712266256835954660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd4efcbc00bd110e5eb5e90c1fa46df,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee2be11fcfff16b0b41d2b2b7941dd9ce0212fca778a5dd4cf38f8ecbc4bb838,PodSandboxId:bfbd766e9aa26adbe02cfc7c73492fc0a5ab4e7679130460c89acd089cee82f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97
387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712266256794952480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d97b3416948b1f463ea4a0688c7b44,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33dc59c983460025ea40b10d01d83bc4a5ee127691a3b67f33dd6ca1cf50e11a,PodSandboxId:8ef374cdcaf4b008da4470c5047cbe57ccc78fb2507fcd145f8886e2a09cf009,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712266256809212084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 842ad2383a929092f124e597b7364770,},Annotations:map[string]string{io.kubernetes.container.hash: f696e9aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4eeb26158b8c7d387bcfcb089a6bf87240ebf319c906aa4879ad38dc696ffc7,PodSandboxId:1dfdf02e42fdd73fa6b38647b2702392f98ee7690d630b5f7af8da62cce08d62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efa
ab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712266256800075347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 828fd7aff758dd125e98081343dd7f86,},Annotations:map[string]string{io.kubernetes.container.hash: 5426a594,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18e4190b-7abc-4439-87cd-b9b024b797e8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.830334127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e83b4b3-a707-43d8-831a-f12fb5818085 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.830677860Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e83b4b3-a707-43d8-831a-f12fb5818085 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.832147093Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2106f80d-a739-4bac-b429-bdee1cde7972 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.833747209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712266611833719696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571855,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2106f80d-a739-4bac-b429-bdee1cde7972 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.834290356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d6e1665-6f80-47d7-9e5d-b72327bbe7a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.834358565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d6e1665-6f80-47d7-9e5d-b72327bbe7a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.834738896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2b57a668d7946400c1d7019e5b90dc09596df670ce729c03dacfa8e1f5e541e,PodSandboxId:8e9b8ceb3e47bdc6302f10886b46badb1f2623ad272b96d38d7c2af756a79d82,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712266605680155933,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-w4qkl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 295f8e6d-f849-41e7-b4c9-2602055db742,},Annotations:map[string]string{io.kubernetes.container.hash: fd6daeaa,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59def19f256e0fdd30a094a5f0690c4f8b59f273f210b8efd1e88c943107d16,PodSandboxId:aa62df047f2962bbc614e4677f8819eefa9fd50bd9d5cbd165454172b44bb59a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712266474219823556,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-phlpc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 94f2c53a-d004-4235-ab7f-d56fab607309,},Annota
tions:map[string]string{io.kubernetes.container.hash: b1bbfe3e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cbe3469e0fab7a26d5b847f7e06a4113bee5e44e7050e27b27c480b4317ab,PodSandboxId:ae76d4b571236f9b75b893a5069ffbdfc587acdd9063730e5a6ea305e7768243,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712266464688990472,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 695f5ffb-1ddc-4d3c-876b-41c0e72062f7,},Annotations:map[string]string{io.kubernetes.container.hash: 534dbdcb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501a83666c0c541ed4c8850c28cc0f08f04b90dcf36917470ef7b4a5a7541c1f,PodSandboxId:a7f5be98a3475025e428b6c46996f8fed529f780c9bf211cfb9809276e42ceba,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712266358522522145,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-rq7sf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 70bf348e-a2ef-49b2-ba9f-fe2022dad2a1,},Annotations:map[string]string{io.kubernetes.container.hash: 4965a982,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4570e7767e17bf74f1ffecff12d5adfc2242e5899ace5fcf8ffb869406fde2,PodSandboxId:33f079051b1fe7f30af1f83babf1958586349658b31dd19ffc227cf4cc0f7e63,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712266350865064598,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kndb4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 789815ed-7dff-4847-9ebe-6543bed84702,},Annotations:map[string]string{io.kubernetes.container.hash: e2b8fa52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b5b53edf30611c4e64dea21de5b8eb0a43aba8933971e5cccb2191f3b8c5a5b,PodSandboxId:b3d40f4fc87f3a7e810951042a11bf72f32248d8c54d5a986d5da6efda269111,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1712266349848512388,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6n22j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 748c86df-01e3-4a31-9b89-1ad812781d34,},Annotations:map[string]string{io.kubernetes.container.hash: f00317f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d4363b3d89df38a3241f4686216f68d35168ec298131d211786399ef2d22d2,PodSandboxId:f3d3d05cabd8b389c90395e2510667e93ca6ee3a6b1768d0d75ca6ae923be688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1712266332089948766,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-88stp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 9f86c2c9-62b3-41e7-9373-de69268e4332,},Annotations:map[string]string{io.kubernetes.container.hash: 61f344d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8ae4b9f83ff9ac2399a6afb68e9ce85aaa996f977d64a49916d3e165ab3175,PodSandboxId:5ca97d91392e3642bd4088dd0a4d8357bc1579330ed9eb1ac3421973b57c9061,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712266283239684349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d345bffd-4ee3-446e-a3ea-aa009385ee0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6447da42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b045773f9be62925bc02e7f0daa375abe5d50be87bd429c462cf699f9916f6,PodSandboxId:f0f7609eef8ce39bd1e410c1cc8825d63a62d92d1d04b88395585feb8d3fb665,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712266279549189712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l2rrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b37ee6fc-0ff9-4864-8eb0-797c13c2ebad,},Annotations:map[string]string{io.kubernetes.container.hash: 5191034,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ac21f7dde84b870400b416c3136bf7da83c1fb32b85348d9f857b1b0b5950f,PodSandboxId:cd209b32f03f90d796c663306e919ffda36119272def71671110ab8f183aa1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,}
,Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712266276028665270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x5lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b741e9fb-25a1-4df5-8add-86a611026f90,},Annotations:map[string]string{io.kubernetes.container.hash: 1c371bcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfce3317a31e882defaca9c4cfc54444108d2c4bd2af93764963ba50cfd96897,PodSandboxId:f2584d4e59f2d8219d8a0855847afc68b361b964fddfaed62e37c8d79019cfac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390
d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712266256835954660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd4efcbc00bd110e5eb5e90c1fa46df,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee2be11fcfff16b0b41d2b2b7941dd9ce0212fca778a5dd4cf38f8ecbc4bb838,PodSandboxId:bfbd766e9aa26adbe02cfc7c73492fc0a5ab4e7679130460c89acd089cee82f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97
387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712266256794952480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d97b3416948b1f463ea4a0688c7b44,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33dc59c983460025ea40b10d01d83bc4a5ee127691a3b67f33dd6ca1cf50e11a,PodSandboxId:8ef374cdcaf4b008da4470c5047cbe57ccc78fb2507fcd145f8886e2a09cf009,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712266256809212084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 842ad2383a929092f124e597b7364770,},Annotations:map[string]string{io.kubernetes.container.hash: f696e9aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4eeb26158b8c7d387bcfcb089a6bf87240ebf319c906aa4879ad38dc696ffc7,PodSandboxId:1dfdf02e42fdd73fa6b38647b2702392f98ee7690d630b5f7af8da62cce08d62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efa
ab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712266256800075347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 828fd7aff758dd125e98081343dd7f86,},Annotations:map[string]string{io.kubernetes.container.hash: 5426a594,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d6e1665-6f80-47d7-9e5d-b72327bbe7a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.885709111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21603668-64d3-4cbd-aefc-428547fffcec name=/runtime.v1.RuntimeService/Version
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.885783986Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21603668-64d3-4cbd-aefc-428547fffcec name=/runtime.v1.RuntimeService/Version
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.886991038Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35a31b93-2deb-4274-9bcc-27d96ad9324f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.888751594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712266611888723019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571855,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35a31b93-2deb-4274-9bcc-27d96ad9324f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.890010118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0054ebc2-d4e6-4bcb-af6a-16a9b2607b09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.890069658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0054ebc2-d4e6-4bcb-af6a-16a9b2607b09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:36:51 addons-371778 crio[685]: time="2024-04-04 21:36:51.890451715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2b57a668d7946400c1d7019e5b90dc09596df670ce729c03dacfa8e1f5e541e,PodSandboxId:8e9b8ceb3e47bdc6302f10886b46badb1f2623ad272b96d38d7c2af756a79d82,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712266605680155933,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-w4qkl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 295f8e6d-f849-41e7-b4c9-2602055db742,},Annotations:map[string]string{io.kubernetes.container.hash: fd6daeaa,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59def19f256e0fdd30a094a5f0690c4f8b59f273f210b8efd1e88c943107d16,PodSandboxId:aa62df047f2962bbc614e4677f8819eefa9fd50bd9d5cbd165454172b44bb59a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712266474219823556,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-phlpc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 94f2c53a-d004-4235-ab7f-d56fab607309,},Annota
tions:map[string]string{io.kubernetes.container.hash: b1bbfe3e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cbe3469e0fab7a26d5b847f7e06a4113bee5e44e7050e27b27c480b4317ab,PodSandboxId:ae76d4b571236f9b75b893a5069ffbdfc587acdd9063730e5a6ea305e7768243,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712266464688990472,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 695f5ffb-1ddc-4d3c-876b-41c0e72062f7,},Annotations:map[string]string{io.kubernetes.container.hash: 534dbdcb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501a83666c0c541ed4c8850c28cc0f08f04b90dcf36917470ef7b4a5a7541c1f,PodSandboxId:a7f5be98a3475025e428b6c46996f8fed529f780c9bf211cfb9809276e42ceba,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712266358522522145,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-rq7sf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 70bf348e-a2ef-49b2-ba9f-fe2022dad2a1,},Annotations:map[string]string{io.kubernetes.container.hash: 4965a982,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4570e7767e17bf74f1ffecff12d5adfc2242e5899ace5fcf8ffb869406fde2,PodSandboxId:33f079051b1fe7f30af1f83babf1958586349658b31dd19ffc227cf4cc0f7e63,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712266350865064598,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kndb4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 789815ed-7dff-4847-9ebe-6543bed84702,},Annotations:map[string]string{io.kubernetes.container.hash: e2b8fa52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b5b53edf30611c4e64dea21de5b8eb0a43aba8933971e5cccb2191f3b8c5a5b,PodSandboxId:b3d40f4fc87f3a7e810951042a11bf72f32248d8c54d5a986d5da6efda269111,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1712266349848512388,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6n22j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 748c86df-01e3-4a31-9b89-1ad812781d34,},Annotations:map[string]string{io.kubernetes.container.hash: f00317f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d4363b3d89df38a3241f4686216f68d35168ec298131d211786399ef2d22d2,PodSandboxId:f3d3d05cabd8b389c90395e2510667e93ca6ee3a6b1768d0d75ca6ae923be688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1712266332089948766,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-88stp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 9f86c2c9-62b3-41e7-9373-de69268e4332,},Annotations:map[string]string{io.kubernetes.container.hash: 61f344d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8ae4b9f83ff9ac2399a6afb68e9ce85aaa996f977d64a49916d3e165ab3175,PodSandboxId:5ca97d91392e3642bd4088dd0a4d8357bc1579330ed9eb1ac3421973b57c9061,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712266283239684349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d345bffd-4ee3-446e-a3ea-aa009385ee0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6447da42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b045773f9be62925bc02e7f0daa375abe5d50be87bd429c462cf699f9916f6,PodSandboxId:f0f7609eef8ce39bd1e410c1cc8825d63a62d92d1d04b88395585feb8d3fb665,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712266279549189712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l2rrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b37ee6fc-0ff9-4864-8eb0-797c13c2ebad,},Annotations:map[string]string{io.kubernetes.container.hash: 5191034,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ac21f7dde84b870400b416c3136bf7da83c1fb32b85348d9f857b1b0b5950f,PodSandboxId:cd209b32f03f90d796c663306e919ffda36119272def71671110ab8f183aa1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,}
,Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712266276028665270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x5lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b741e9fb-25a1-4df5-8add-86a611026f90,},Annotations:map[string]string{io.kubernetes.container.hash: 1c371bcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfce3317a31e882defaca9c4cfc54444108d2c4bd2af93764963ba50cfd96897,PodSandboxId:f2584d4e59f2d8219d8a0855847afc68b361b964fddfaed62e37c8d79019cfac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390
d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712266256835954660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd4efcbc00bd110e5eb5e90c1fa46df,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee2be11fcfff16b0b41d2b2b7941dd9ce0212fca778a5dd4cf38f8ecbc4bb838,PodSandboxId:bfbd766e9aa26adbe02cfc7c73492fc0a5ab4e7679130460c89acd089cee82f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97
387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712266256794952480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d97b3416948b1f463ea4a0688c7b44,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33dc59c983460025ea40b10d01d83bc4a5ee127691a3b67f33dd6ca1cf50e11a,PodSandboxId:8ef374cdcaf4b008da4470c5047cbe57ccc78fb2507fcd145f8886e2a09cf009,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712266256809212084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 842ad2383a929092f124e597b7364770,},Annotations:map[string]string{io.kubernetes.container.hash: f696e9aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4eeb26158b8c7d387bcfcb089a6bf87240ebf319c906aa4879ad38dc696ffc7,PodSandboxId:1dfdf02e42fdd73fa6b38647b2702392f98ee7690d630b5f7af8da62cce08d62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efa
ab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712266256800075347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-371778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 828fd7aff758dd125e98081343dd7f86,},Annotations:map[string]string{io.kubernetes.container.hash: 5426a594,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0054ebc2-d4e6-4bcb-af6a-16a9b2607b09 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e2b57a668d794       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      6 seconds ago       Running             hello-world-app           0                   8e9b8ceb3e47b       hello-world-app-5d77478584-w4qkl
	e59def19f256e       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        2 minutes ago       Running             headlamp                  0                   aa62df047f296       headlamp-5b77dbd7c4-phlpc
	b43cbe3469e0f       docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                              2 minutes ago       Running             nginx                     0                   ae76d4b571236       nginx
	501a83666c0c5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 4 minutes ago       Running             gcp-auth                  0                   a7f5be98a3475       gcp-auth-7d69788767-rq7sf
	1f4570e7767e1       b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135                                                             4 minutes ago       Exited              patch                     1                   33f079051b1fe       ingress-nginx-admission-patch-kndb4
	2b5b53edf3061       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   4 minutes ago       Exited              create                    0                   b3d40f4fc87f3       ingress-nginx-admission-create-6n22j
	b6d4363b3d89d       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   f3d3d05cabd8b       yakd-dashboard-9947fc6bf-88stp
	2a8ae4b9f83ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   5ca97d91392e3       storage-provisioner
	48b045773f9be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   f0f7609eef8ce       coredns-76f75df574-l2rrs
	49ac21f7dde84       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                                             5 minutes ago       Running             kube-proxy                0                   cd209b32f03f9       kube-proxy-9x5lc
	bfce3317a31e8       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                                             5 minutes ago       Running             kube-scheduler            0                   f2584d4e59f2d       kube-scheduler-addons-371778
	33dc59c983460       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   8ef374cdcaf4b       etcd-addons-371778
	d4eeb26158b8c       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                                             5 minutes ago       Running             kube-apiserver            0                   1dfdf02e42fdd       kube-apiserver-addons-371778
	ee2be11fcfff1       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                                             5 minutes ago       Running             kube-controller-manager   0                   bfbd766e9aa26       kube-controller-manager-addons-371778
	
	
	==> coredns [48b045773f9be62925bc02e7f0daa375abe5d50be87bd429c462cf699f9916f6] <==
	[INFO] 10.244.0.8:59015 - 56913 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000789352s
	[INFO] 10.244.0.8:40447 - 265 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000121385s
	[INFO] 10.244.0.8:40447 - 40203 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000399261s
	[INFO] 10.244.0.8:54205 - 56064 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056499s
	[INFO] 10.244.0.8:54205 - 58114 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007258s
	[INFO] 10.244.0.8:42363 - 6852 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093653s
	[INFO] 10.244.0.8:42363 - 48583 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148258s
	[INFO] 10.244.0.8:58167 - 64522 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101171s
	[INFO] 10.244.0.8:58167 - 41743 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000050991s
	[INFO] 10.244.0.8:42400 - 24593 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065859s
	[INFO] 10.244.0.8:42400 - 64279 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000027739s
	[INFO] 10.244.0.8:35811 - 28022 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073958s
	[INFO] 10.244.0.8:35811 - 20848 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000026241s
	[INFO] 10.244.0.8:51157 - 25375 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118978s
	[INFO] 10.244.0.8:51157 - 62748 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000060369s
	[INFO] 10.244.0.21:52485 - 43091 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000303938s
	[INFO] 10.244.0.21:60199 - 23655 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000124042s
	[INFO] 10.244.0.21:41020 - 31704 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118923s
	[INFO] 10.244.0.21:38577 - 16065 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123603s
	[INFO] 10.244.0.21:53875 - 15156 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000075091s
	[INFO] 10.244.0.21:54426 - 17098 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009727s
	[INFO] 10.244.0.21:40787 - 4219 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001018665s
	[INFO] 10.244.0.21:47672 - 21121 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000923403s
	[INFO] 10.244.0.26:49481 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00069257s
	[INFO] 10.244.0.26:48337 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000150369s
	
	
	==> describe nodes <==
	Name:               addons-371778
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-371778
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=addons-371778
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T21_31_03_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-371778
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:30:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-371778
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:36:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:34:38 +0000   Thu, 04 Apr 2024 21:30:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:34:38 +0000   Thu, 04 Apr 2024 21:30:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:34:38 +0000   Thu, 04 Apr 2024 21:30:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:34:38 +0000   Thu, 04 Apr 2024 21:31:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    addons-371778
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ea27379ea5b4fa5a7fee6275defeedc
	  System UUID:                8ea27379-ea5b-4fa5-a7fe-e6275defeedc
	  Boot ID:                    7d464cbc-b304-46fa-899c-63e916952630
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-w4qkl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-7d69788767-rq7sf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  headlamp                    headlamp-5b77dbd7c4-phlpc                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 coredns-76f75df574-l2rrs                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m37s
	  kube-system                 etcd-addons-371778                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m49s
	  kube-system                 kube-apiserver-addons-371778             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-controller-manager-addons-371778    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-proxy-9x5lc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-scheduler-addons-371778             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-88stp           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  Starting                 5m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet          Node addons-371778 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet          Node addons-371778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x7 over 5m56s)  kubelet          Node addons-371778 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m49s                  kubelet          Node addons-371778 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s                  kubelet          Node addons-371778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s                  kubelet          Node addons-371778 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m49s                  kubelet          Node addons-371778 status is now: NodeReady
	  Normal  RegisteredNode           5m38s                  node-controller  Node addons-371778 event: Registered Node addons-371778 in Controller
	
	
	==> dmesg <==
	[  +0.088906] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.863766] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.391628] systemd-fstab-generator[1601]: Ignoring "noauto" option for root device
	[  +4.731688] kauditd_printk_skb: 99 callbacks suppressed
	[  +5.085004] kauditd_printk_skb: 127 callbacks suppressed
	[  +7.918215] kauditd_printk_skb: 100 callbacks suppressed
	[Apr 4 21:32] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.371505] kauditd_printk_skb: 8 callbacks suppressed
	[  +9.834892] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.903134] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.009148] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.700758] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.150433] kauditd_printk_skb: 14 callbacks suppressed
	[Apr 4 21:33] kauditd_printk_skb: 24 callbacks suppressed
	[ +17.388279] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.414899] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.458874] kauditd_printk_skb: 16 callbacks suppressed
	[Apr 4 21:34] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.304567] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.016443] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.898558] kauditd_printk_skb: 33 callbacks suppressed
	[  +7.623024] kauditd_printk_skb: 48 callbacks suppressed
	[  +9.520896] kauditd_printk_skb: 52 callbacks suppressed
	[Apr 4 21:36] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.299670] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [33dc59c983460025ea40b10d01d83bc4a5ee127691a3b67f33dd6ca1cf50e11a] <==
	{"level":"warn","ts":"2024-04-04T21:32:17.042894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.996377ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85182"}
	{"level":"info","ts":"2024-04-04T21:32:17.042943Z","caller":"traceutil/trace.go:171","msg":"trace[1475358585] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1010; }","duration":"141.073272ms","start":"2024-04-04T21:32:16.901861Z","end":"2024-04-04T21:32:17.042934Z","steps":["trace[1475358585] 'agreement among raft nodes before linearized reading'  (duration: 140.696974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T21:32:17.0431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.35846ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14077"}
	{"level":"info","ts":"2024-04-04T21:32:17.043152Z","caller":"traceutil/trace.go:171","msg":"trace[785815937] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1010; }","duration":"139.431131ms","start":"2024-04-04T21:32:16.903714Z","end":"2024-04-04T21:32:17.043145Z","steps":["trace[785815937] 'agreement among raft nodes before linearized reading'  (duration: 139.326584ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T21:32:23.792529Z","caller":"traceutil/trace.go:171","msg":"trace[1679612153] transaction","detail":"{read_only:false; response_revision:1027; number_of_response:1; }","duration":"102.219943ms","start":"2024-04-04T21:32:23.690293Z","end":"2024-04-04T21:32:23.792513Z","steps":["trace[1679612153] 'process raft request'  (duration: 102.046937ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T21:32:34.663338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.228332ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5264583354604515921 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1056 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:419 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-04T21:32:34.663488Z","caller":"traceutil/trace.go:171","msg":"trace[1381073338] linearizableReadLoop","detail":"{readStateIndex:1136; appliedIndex:1135; }","duration":"399.628861ms","start":"2024-04-04T21:32:34.263848Z","end":"2024-04-04T21:32:34.663477Z","steps":["trace[1381073338] 'read index received'  (duration: 155.062739ms)","trace[1381073338] 'applied index is now lower than readState.Index'  (duration: 244.565348ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-04T21:32:34.663543Z","caller":"traceutil/trace.go:171","msg":"trace[528929940] transaction","detail":"{read_only:false; response_revision:1103; number_of_response:1; }","duration":"449.997494ms","start":"2024-04-04T21:32:34.213539Z","end":"2024-04-04T21:32:34.663536Z","steps":["trace[528929940] 'process raft request'  (duration: 205.390428ms)","trace[528929940] 'compare'  (duration: 243.984477ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-04T21:32:34.663586Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T21:32:34.213521Z","time spent":"450.03679ms","remote":"127.0.0.1:58990","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":482,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1056 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:419 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-04-04T21:32:34.663748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.440715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11625"}
	{"level":"info","ts":"2024-04-04T21:32:34.663825Z","caller":"traceutil/trace.go:171","msg":"trace[646819891] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1103; }","duration":"344.541774ms","start":"2024-04-04T21:32:34.319274Z","end":"2024-04-04T21:32:34.663815Z","steps":["trace[646819891] 'agreement among raft nodes before linearized reading'  (duration: 344.328912ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T21:32:34.663849Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T21:32:34.31926Z","time spent":"344.582089ms","remote":"127.0.0.1:58910","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11649,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-04-04T21:32:34.664144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"400.287965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85367"}
	{"level":"info","ts":"2024-04-04T21:32:34.664172Z","caller":"traceutil/trace.go:171","msg":"trace[1170855978] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1103; }","duration":"400.34942ms","start":"2024-04-04T21:32:34.263815Z","end":"2024-04-04T21:32:34.664165Z","steps":["trace[1170855978] 'agreement among raft nodes before linearized reading'  (duration: 400.176872ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T21:32:34.664195Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T21:32:34.263798Z","time spent":"400.392724ms","remote":"127.0.0.1:58910","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85391,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-04-04T21:32:34.664426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.479773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14462"}
	{"level":"info","ts":"2024-04-04T21:32:34.664454Z","caller":"traceutil/trace.go:171","msg":"trace[308086203] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1103; }","duration":"260.594139ms","start":"2024-04-04T21:32:34.403851Z","end":"2024-04-04T21:32:34.664446Z","steps":["trace[308086203] 'agreement among raft nodes before linearized reading'  (duration: 260.438697ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T21:32:59.899176Z","caller":"traceutil/trace.go:171","msg":"trace[328156628] transaction","detail":"{read_only:false; response_revision:1212; number_of_response:1; }","duration":"136.611948ms","start":"2024-04-04T21:32:59.762537Z","end":"2024-04-04T21:32:59.899149Z","steps":["trace[328156628] 'process raft request'  (duration: 136.458202ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T21:34:33.093107Z","caller":"traceutil/trace.go:171","msg":"trace[1215500763] linearizableReadLoop","detail":"{readStateIndex:1853; appliedIndex:1852; }","duration":"140.009837ms","start":"2024-04-04T21:34:32.953071Z","end":"2024-04-04T21:34:33.093081Z","steps":["trace[1215500763] 'read index received'  (duration: 139.878008ms)","trace[1215500763] 'applied index is now lower than readState.Index'  (duration: 131.425µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-04T21:34:33.093416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/kube-system/csi-hostpath-attacher\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-04T21:34:33.093451Z","caller":"traceutil/trace.go:171","msg":"trace[1945624248] range","detail":"{range_begin:/registry/statefulsets/kube-system/csi-hostpath-attacher; range_end:; response_count:0; response_revision:1783; }","duration":"140.39704ms","start":"2024-04-04T21:34:32.953045Z","end":"2024-04-04T21:34:33.093442Z","steps":["trace[1945624248] 'agreement among raft nodes before linearized reading'  (duration: 140.181226ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T21:34:33.093717Z","caller":"traceutil/trace.go:171","msg":"trace[971777582] transaction","detail":"{read_only:false; response_revision:1783; number_of_response:1; }","duration":"161.9535ms","start":"2024-04-04T21:34:32.931754Z","end":"2024-04-04T21:34:33.093708Z","steps":["trace[971777582] 'process raft request'  (duration: 161.239095ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T21:35:06.754782Z","caller":"traceutil/trace.go:171","msg":"trace[1730011966] linearizableReadLoop","detail":"{readStateIndex:1942; appliedIndex:1941; }","duration":"214.612548ms","start":"2024-04-04T21:35:06.540141Z","end":"2024-04-04T21:35:06.754754Z","steps":["trace[1730011966] 'read index received'  (duration: 212.379473ms)","trace[1730011966] 'applied index is now lower than readState.Index'  (duration: 2.231985ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-04T21:35:06.75498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.799418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-04T21:35:06.75505Z","caller":"traceutil/trace.go:171","msg":"trace[1426006819] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1864; }","duration":"214.932383ms","start":"2024-04-04T21:35:06.540106Z","end":"2024-04-04T21:35:06.755038Z","steps":["trace[1426006819] 'agreement among raft nodes before linearized reading'  (duration: 214.79669ms)"],"step_count":1}
	
	
	==> gcp-auth [501a83666c0c541ed4c8850c28cc0f08f04b90dcf36917470ef7b4a5a7541c1f] <==
	2024/04/04 21:32:38 GCP Auth Webhook started!
	2024/04/04 21:33:50 Ready to marshal response ...
	2024/04/04 21:33:50 Ready to write response ...
	2024/04/04 21:33:50 Ready to marshal response ...
	2024/04/04 21:33:50 Ready to write response ...
	2024/04/04 21:33:55 Ready to marshal response ...
	2024/04/04 21:33:55 Ready to write response ...
	2024/04/04 21:34:01 Ready to marshal response ...
	2024/04/04 21:34:01 Ready to write response ...
	2024/04/04 21:34:03 Ready to marshal response ...
	2024/04/04 21:34:03 Ready to write response ...
	2024/04/04 21:34:13 Ready to marshal response ...
	2024/04/04 21:34:13 Ready to write response ...
	2024/04/04 21:34:17 Ready to marshal response ...
	2024/04/04 21:34:17 Ready to write response ...
	2024/04/04 21:34:19 Ready to marshal response ...
	2024/04/04 21:34:19 Ready to write response ...
	2024/04/04 21:34:27 Ready to marshal response ...
	2024/04/04 21:34:27 Ready to write response ...
	2024/04/04 21:34:27 Ready to marshal response ...
	2024/04/04 21:34:27 Ready to write response ...
	2024/04/04 21:34:27 Ready to marshal response ...
	2024/04/04 21:34:27 Ready to write response ...
	2024/04/04 21:36:41 Ready to marshal response ...
	2024/04/04 21:36:41 Ready to write response ...
	
	
	==> kernel <==
	 21:36:52 up 6 min,  0 users,  load average: 0.36, 1.20, 0.70
	Linux addons-371778 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d4eeb26158b8c7d387bcfcb089a6bf87240ebf319c906aa4879ad38dc696ffc7] <==
	E0404 21:32:34.924783       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0404 21:32:34.924947       1 timeout.go:142] post-timeout activity - time-elapsed: 11.940278ms, GET "/api/v1/pods" result: <nil>
	I0404 21:34:04.931714       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0404 21:34:17.322241       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.212:8443->10.244.0.28:55070: read: connection reset by peer
	E0404 21:34:19.400067       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0404 21:34:19.632848       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0404 21:34:19.822816       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.165.72"}
	I0404 21:34:20.953254       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0404 21:34:21.999693       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0404 21:34:27.509595       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.142.1"}
	I0404 21:34:34.789284       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0404 21:34:34.789322       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0404 21:34:34.814473       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0404 21:34:34.814711       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0404 21:34:34.824196       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0404 21:34:34.824336       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0404 21:34:34.836829       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0404 21:34:34.836902       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0404 21:34:34.949044       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0404 21:34:34.949111       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0404 21:34:35.824659       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0404 21:34:35.951104       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0404 21:34:35.967065       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0404 21:35:05.558909       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0404 21:36:41.598099       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.254.191"}
	
	
	==> kube-controller-manager [ee2be11fcfff16b0b41d2b2b7941dd9ce0212fca778a5dd4cf38f8ecbc4bb838] <==
	W0404 21:35:38.233279       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0404 21:35:38.233613       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0404 21:35:42.224533       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0404 21:35:42.224651       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0404 21:35:55.739631       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0404 21:35:55.739682       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0404 21:36:01.024135       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0404 21:36:01.024189       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0404 21:36:24.939585       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0404 21:36:24.939651       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0404 21:36:41.360814       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0404 21:36:41.361083       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0404 21:36:41.381471       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0404 21:36:41.434609       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-w4qkl"
	I0404 21:36:41.452219       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.333901ms"
	I0404 21:36:41.468322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.964225ms"
	I0404 21:36:41.468693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="140.115µs"
	I0404 21:36:41.480187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="78.967µs"
	I0404 21:36:43.790015       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0404 21:36:43.797241       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0404 21:36:43.804035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="3.882µs"
	I0404 21:36:46.053045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.121165ms"
	I0404 21:36:46.054690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="91.135µs"
	W0404 21:36:51.441077       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0404 21:36:51.441137       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [49ac21f7dde84b870400b416c3136bf7da83c1fb32b85348d9f857b1b0b5950f] <==
	I0404 21:31:16.160503       1 server_others.go:72] "Using iptables proxy"
	I0404 21:31:16.170304       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.212"]
	I0404 21:31:16.263565       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 21:31:16.263606       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 21:31:16.263621       1 server_others.go:168] "Using iptables Proxier"
	I0404 21:31:16.272448       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 21:31:16.272910       1 server.go:865] "Version info" version="v1.29.3"
	I0404 21:31:16.272960       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 21:31:16.274329       1 config.go:188] "Starting service config controller"
	I0404 21:31:16.274457       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 21:31:16.274494       1 config.go:97] "Starting endpoint slice config controller"
	I0404 21:31:16.274502       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 21:31:16.275240       1 config.go:315] "Starting node config controller"
	I0404 21:31:16.275286       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 21:31:16.375323       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0404 21:31:16.375496       1 shared_informer.go:318] Caches are synced for node config
	I0404 21:31:16.375546       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [bfce3317a31e882defaca9c4cfc54444108d2c4bd2af93764963ba50cfd96897] <==
	W0404 21:31:00.502330       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0404 21:31:00.502432       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0404 21:31:00.513691       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0404 21:31:00.513754       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0404 21:31:00.514834       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0404 21:31:00.514873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0404 21:31:00.645672       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0404 21:31:00.645718       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0404 21:31:00.675492       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0404 21:31:00.675587       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0404 21:31:00.717590       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0404 21:31:00.717759       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 21:31:00.795782       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0404 21:31:00.796163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0404 21:31:00.907604       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0404 21:31:00.907652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0404 21:31:00.926451       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0404 21:31:00.926498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0404 21:31:00.952858       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0404 21:31:00.953021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0404 21:31:00.958589       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0404 21:31:00.958745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0404 21:31:00.976543       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0404 21:31:00.976608       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0404 21:31:02.534539       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 04 21:36:41 addons-371778 kubelet[1283]: I0404 21:36:41.445701    1283 memory_manager.go:354] "RemoveStaleState removing state" podUID="5605a1b9-ec6c-46bd-a4d6-e7d7f7a5816d" containerName="gadget"
	Apr 04 21:36:41 addons-371778 kubelet[1283]: I0404 21:36:41.445714    1283 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d018f5-2166-4bca-aeaa-41b781d57d5f" containerName="volume-snapshot-controller"
	Apr 04 21:36:41 addons-371778 kubelet[1283]: I0404 21:36:41.445721    1283 memory_manager.go:354] "RemoveStaleState removing state" podUID="472dc225-bbff-4058-ad27-a9e8360750b7" containerName="csi-provisioner"
	Apr 04 21:36:41 addons-371778 kubelet[1283]: I0404 21:36:41.445728    1283 memory_manager.go:354] "RemoveStaleState removing state" podUID="472dc225-bbff-4058-ad27-a9e8360750b7" containerName="node-driver-registrar"
	Apr 04 21:36:41 addons-371778 kubelet[1283]: I0404 21:36:41.445739    1283 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb16fcaa-94a1-42d7-9e0c-9550e4065b97" containerName="local-path-provisioner"
	Apr 04 21:36:41 addons-371778 kubelet[1283]: I0404 21:36:41.504168    1283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/295f8e6d-f849-41e7-b4c9-2602055db742-gcp-creds\") pod \"hello-world-app-5d77478584-w4qkl\" (UID: \"295f8e6d-f849-41e7-b4c9-2602055db742\") " pod="default/hello-world-app-5d77478584-w4qkl"
	Apr 04 21:36:41 addons-371778 kubelet[1283]: I0404 21:36:41.504258    1283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5qch\" (UniqueName: \"kubernetes.io/projected/295f8e6d-f849-41e7-b4c9-2602055db742-kube-api-access-j5qch\") pod \"hello-world-app-5d77478584-w4qkl\" (UID: \"295f8e6d-f849-41e7-b4c9-2602055db742\") " pod="default/hello-world-app-5d77478584-w4qkl"
	Apr 04 21:36:42 addons-371778 kubelet[1283]: I0404 21:36:42.612867    1283 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw928\" (UniqueName: \"kubernetes.io/projected/c89cb0ef-3601-4100-9a13-ef24f7df1c79-kube-api-access-gw928\") pod \"c89cb0ef-3601-4100-9a13-ef24f7df1c79\" (UID: \"c89cb0ef-3601-4100-9a13-ef24f7df1c79\") "
	Apr 04 21:36:42 addons-371778 kubelet[1283]: I0404 21:36:42.621198    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89cb0ef-3601-4100-9a13-ef24f7df1c79-kube-api-access-gw928" (OuterVolumeSpecName: "kube-api-access-gw928") pod "c89cb0ef-3601-4100-9a13-ef24f7df1c79" (UID: "c89cb0ef-3601-4100-9a13-ef24f7df1c79"). InnerVolumeSpecName "kube-api-access-gw928". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 04 21:36:42 addons-371778 kubelet[1283]: I0404 21:36:42.713651    1283 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gw928\" (UniqueName: \"kubernetes.io/projected/c89cb0ef-3601-4100-9a13-ef24f7df1c79-kube-api-access-gw928\") on node \"addons-371778\" DevicePath \"\""
	Apr 04 21:36:43 addons-371778 kubelet[1283]: I0404 21:36:43.007528    1283 scope.go:117] "RemoveContainer" containerID="c16fd57807e405c02d5465825537360a8e65e63d31290bd2c54734e1c4e4012d"
	Apr 04 21:36:43 addons-371778 kubelet[1283]: I0404 21:36:43.058743    1283 scope.go:117] "RemoveContainer" containerID="c16fd57807e405c02d5465825537360a8e65e63d31290bd2c54734e1c4e4012d"
	Apr 04 21:36:43 addons-371778 kubelet[1283]: E0404 21:36:43.060198    1283 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c16fd57807e405c02d5465825537360a8e65e63d31290bd2c54734e1c4e4012d\": container with ID starting with c16fd57807e405c02d5465825537360a8e65e63d31290bd2c54734e1c4e4012d not found: ID does not exist" containerID="c16fd57807e405c02d5465825537360a8e65e63d31290bd2c54734e1c4e4012d"
	Apr 04 21:36:43 addons-371778 kubelet[1283]: I0404 21:36:43.060245    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c16fd57807e405c02d5465825537360a8e65e63d31290bd2c54734e1c4e4012d"} err="failed to get container status \"c16fd57807e405c02d5465825537360a8e65e63d31290bd2c54734e1c4e4012d\": rpc error: code = NotFound desc = could not find container \"c16fd57807e405c02d5465825537360a8e65e63d31290bd2c54734e1c4e4012d\": container with ID starting with c16fd57807e405c02d5465825537360a8e65e63d31290bd2c54734e1c4e4012d not found: ID does not exist"
	Apr 04 21:36:43 addons-371778 kubelet[1283]: I0404 21:36:43.329922    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89cb0ef-3601-4100-9a13-ef24f7df1c79" path="/var/lib/kubelet/pods/c89cb0ef-3601-4100-9a13-ef24f7df1c79/volumes"
	Apr 04 21:36:45 addons-371778 kubelet[1283]: I0404 21:36:45.329506    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748c86df-01e3-4a31-9b89-1ad812781d34" path="/var/lib/kubelet/pods/748c86df-01e3-4a31-9b89-1ad812781d34/volumes"
	Apr 04 21:36:45 addons-371778 kubelet[1283]: I0404 21:36:45.329963    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789815ed-7dff-4847-9ebe-6543bed84702" path="/var/lib/kubelet/pods/789815ed-7dff-4847-9ebe-6543bed84702/volumes"
	Apr 04 21:36:47 addons-371778 kubelet[1283]: I0404 21:36:47.252324    1283 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76jmd\" (UniqueName: \"kubernetes.io/projected/d4c1ba2f-32ba-427e-b9cf-6f5a047e8980-kube-api-access-76jmd\") pod \"d4c1ba2f-32ba-427e-b9cf-6f5a047e8980\" (UID: \"d4c1ba2f-32ba-427e-b9cf-6f5a047e8980\") "
	Apr 04 21:36:47 addons-371778 kubelet[1283]: I0404 21:36:47.252477    1283 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d4c1ba2f-32ba-427e-b9cf-6f5a047e8980-webhook-cert\") pod \"d4c1ba2f-32ba-427e-b9cf-6f5a047e8980\" (UID: \"d4c1ba2f-32ba-427e-b9cf-6f5a047e8980\") "
	Apr 04 21:36:47 addons-371778 kubelet[1283]: I0404 21:36:47.254819    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c1ba2f-32ba-427e-b9cf-6f5a047e8980-kube-api-access-76jmd" (OuterVolumeSpecName: "kube-api-access-76jmd") pod "d4c1ba2f-32ba-427e-b9cf-6f5a047e8980" (UID: "d4c1ba2f-32ba-427e-b9cf-6f5a047e8980"). InnerVolumeSpecName "kube-api-access-76jmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 04 21:36:47 addons-371778 kubelet[1283]: I0404 21:36:47.255512    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c1ba2f-32ba-427e-b9cf-6f5a047e8980-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d4c1ba2f-32ba-427e-b9cf-6f5a047e8980" (UID: "d4c1ba2f-32ba-427e-b9cf-6f5a047e8980"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 04 21:36:47 addons-371778 kubelet[1283]: I0404 21:36:47.329767    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c1ba2f-32ba-427e-b9cf-6f5a047e8980" path="/var/lib/kubelet/pods/d4c1ba2f-32ba-427e-b9cf-6f5a047e8980/volumes"
	Apr 04 21:36:47 addons-371778 kubelet[1283]: I0404 21:36:47.352811    1283 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-76jmd\" (UniqueName: \"kubernetes.io/projected/d4c1ba2f-32ba-427e-b9cf-6f5a047e8980-kube-api-access-76jmd\") on node \"addons-371778\" DevicePath \"\""
	Apr 04 21:36:47 addons-371778 kubelet[1283]: I0404 21:36:47.352870    1283 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d4c1ba2f-32ba-427e-b9cf-6f5a047e8980-webhook-cert\") on node \"addons-371778\" DevicePath \"\""
	Apr 04 21:36:48 addons-371778 kubelet[1283]: I0404 21:36:48.061150    1283 scope.go:117] "RemoveContainer" containerID="ed17281608d75d925b54363099b83f9c7b2eb81cb4b48180ce639675abd8cf32"
	
	
	==> storage-provisioner [2a8ae4b9f83ff9ac2399a6afb68e9ce85aaa996f977d64a49916d3e165ab3175] <==
	I0404 21:31:23.914193       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0404 21:31:24.423057       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0404 21:31:24.423230       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0404 21:31:24.446086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0404 21:31:24.453180       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"069cec93-ea99-4e1e-b1a4-aaccb6b45138", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-371778_569d9ee1-390d-4b86-9d0a-a49ed33e2c23 became leader
	I0404 21:31:24.454177       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-371778_569d9ee1-390d-4b86-9d0a-a49ed33e2c23!
	I0404 21:31:24.761905       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-371778_569d9ee1-390d-4b86-9d0a-a49ed33e2c23!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-371778 -n addons-371778
helpers_test.go:261: (dbg) Run:  kubectl --context addons-371778 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.83s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.32s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-371778
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-371778: exit status 82 (2m0.484116957s)

                                                
                                                
-- stdout --
	* Stopping node "addons-371778"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-371778" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-371778
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-371778: exit status 11 (21.550888913s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-371778" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-371778
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-371778: exit status 11 (6.143501771s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-371778" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-371778
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-371778: exit status 11 (6.143773156s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-371778" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.32s)
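The failure above follows one pattern: "minikube stop" timed out after two minutes with exit status 82 (GUEST_STOP_TIMEOUT), and each later "addons enable"/"addons disable" call then failed with "dial tcp 192.168.39.212:22: connect: no route to host", meaning libvirt still considers the guest running while it is no longer reachable over SSH. A minimal sketch of how one might inspect and force-stop the stuck domain when reproducing this locally with the kvm2 driver (this assumes the libvirt domain is named after the profile, addons-371778, and lives under the system libvirt URI; neither detail is shown in the log above):

# ask libvirt what it thinks the guest state is
virsh -c qemu:///system list --all
virsh -c qemu:///system domstate addons-371778

# hard power-off the stuck guest, then retry the graceful stop with verbose output
virsh -c qemu:///system destroy addons-371778
out/minikube-linux-amd64 stop -p addons-371778 --alsologtostderr

Note that "virsh destroy" is an immediate power-off rather than an ACPI shutdown, so it is only a recovery step for a guest that ignores the normal stop request.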

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image rm gcr.io/google-containers/addon-resizer:functional-596385 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 image rm gcr.io/google-containers/addon-resizer:functional-596385 --alsologtostderr: (2.396588278s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image ls
functional_test.go:402: expected "gcr.io/google-containers/addon-resizer:functional-596385" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (2.77s)
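functional_test.go:402 asserts that the tag no longer appears in "image ls" immediately after "image rm" returns. A minimal manual sketch of the same check against this profile; the final "ssh -- sudo crictl images" line is an extra cross-check against the CRI-O runtime that the test itself does not run, included only as one way to see whether the image is merely still listed by minikube or actually still present in the runtime:

out/minikube-linux-amd64 -p functional-596385 image rm gcr.io/google-containers/addon-resizer:functional-596385 --alsologtostderr
out/minikube-linux-amd64 -p functional-596385 image ls | grep addon-resizer || echo "tag gone from image ls"
out/minikube-linux-amd64 -p functional-596385 ssh -- sudo crictl images | grep addon-resizer || echo "tag gone from the runtime"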

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 node stop m02 -v=7 --alsologtostderr
E0404 21:49:18.166581   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:49:31.064649   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:50:52.985229   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-454952 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.493988759s)

                                                
                                                
-- stdout --
	* Stopping node "ha-454952-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 21:49:04.561258   25343 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:49:04.561561   25343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:49:04.561626   25343 out.go:304] Setting ErrFile to fd 2...
	I0404 21:49:04.561647   25343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:49:04.562155   25343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:49:04.562805   25343 mustload.go:65] Loading cluster: ha-454952
	I0404 21:49:04.563277   25343 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:49:04.563295   25343 stop.go:39] StopHost: ha-454952-m02
	I0404 21:49:04.563631   25343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:49:04.563673   25343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:49:04.579532   25343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41285
	I0404 21:49:04.580043   25343 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:49:04.580613   25343 main.go:141] libmachine: Using API Version  1
	I0404 21:49:04.580630   25343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:49:04.581097   25343 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:49:04.583584   25343 out.go:177] * Stopping node "ha-454952-m02"  ...
	I0404 21:49:04.585252   25343 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0404 21:49:04.585282   25343 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:49:04.585551   25343 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0404 21:49:04.585574   25343 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:49:04.588352   25343 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:49:04.588773   25343 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:49:04.588800   25343 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:49:04.589000   25343 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:49:04.589187   25343 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:49:04.589386   25343 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:49:04.589548   25343 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	I0404 21:49:04.680578   25343 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0404 21:49:04.734373   25343 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0404 21:49:04.789934   25343 main.go:141] libmachine: Stopping "ha-454952-m02"...
	I0404 21:49:04.789962   25343 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 21:49:04.791616   25343 main.go:141] libmachine: (ha-454952-m02) Calling .Stop
	I0404 21:49:04.796167   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 0/120
	I0404 21:49:05.797514   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 1/120
	I0404 21:49:06.798870   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 2/120
	I0404 21:49:07.800254   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 3/120
	I0404 21:49:08.802583   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 4/120
	I0404 21:49:09.804293   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 5/120
	I0404 21:49:10.806969   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 6/120
	I0404 21:49:11.808393   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 7/120
	I0404 21:49:12.810588   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 8/120
	I0404 21:49:13.811996   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 9/120
	I0404 21:49:14.814431   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 10/120
	I0404 21:49:15.816451   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 11/120
	I0404 21:49:16.818397   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 12/120
	I0404 21:49:17.819954   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 13/120
	I0404 21:49:18.821550   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 14/120
	I0404 21:49:19.823179   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 15/120
	I0404 21:49:20.824568   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 16/120
	I0404 21:49:21.826061   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 17/120
	I0404 21:49:22.827489   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 18/120
	I0404 21:49:23.829844   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 19/120
	I0404 21:49:24.831941   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 20/120
	I0404 21:49:25.833408   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 21/120
	I0404 21:49:26.835098   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 22/120
	I0404 21:49:27.837193   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 23/120
	I0404 21:49:28.838804   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 24/120
	I0404 21:49:29.840961   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 25/120
	I0404 21:49:30.843229   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 26/120
	I0404 21:49:31.844825   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 27/120
	I0404 21:49:32.846610   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 28/120
	I0404 21:49:33.848048   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 29/120
	I0404 21:49:34.850129   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 30/120
	I0404 21:49:35.851513   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 31/120
	I0404 21:49:36.853175   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 32/120
	I0404 21:49:37.855058   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 33/120
	I0404 21:49:38.856541   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 34/120
	I0404 21:49:39.858658   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 35/120
	I0404 21:49:40.860043   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 36/120
	I0404 21:49:41.861606   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 37/120
	I0404 21:49:42.862810   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 38/120
	I0404 21:49:43.864349   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 39/120
	I0404 21:49:44.866669   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 40/120
	I0404 21:49:45.867862   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 41/120
	I0404 21:49:46.869461   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 42/120
	I0404 21:49:47.871466   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 43/120
	I0404 21:49:48.872700   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 44/120
	I0404 21:49:49.874728   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 45/120
	I0404 21:49:50.876145   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 46/120
	I0404 21:49:51.877730   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 47/120
	I0404 21:49:52.879113   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 48/120
	I0404 21:49:53.880412   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 49/120
	I0404 21:49:54.881748   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 50/120
	I0404 21:49:55.883150   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 51/120
	I0404 21:49:56.884487   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 52/120
	I0404 21:49:57.885739   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 53/120
	I0404 21:49:58.887109   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 54/120
	I0404 21:49:59.888390   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 55/120
	I0404 21:50:00.890808   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 56/120
	I0404 21:50:01.892240   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 57/120
	I0404 21:50:02.893610   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 58/120
	I0404 21:50:03.895182   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 59/120
	I0404 21:50:04.897554   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 60/120
	I0404 21:50:05.899091   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 61/120
	I0404 21:50:06.900730   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 62/120
	I0404 21:50:07.903050   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 63/120
	I0404 21:50:08.904332   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 64/120
	I0404 21:50:09.906473   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 65/120
	I0404 21:50:10.908043   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 66/120
	I0404 21:50:11.909209   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 67/120
	I0404 21:50:12.910839   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 68/120
	I0404 21:50:13.912245   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 69/120
	I0404 21:50:14.914602   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 70/120
	I0404 21:50:15.916377   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 71/120
	I0404 21:50:16.918554   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 72/120
	I0404 21:50:17.920066   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 73/120
	I0404 21:50:18.921847   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 74/120
	I0404 21:50:19.924051   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 75/120
	I0404 21:50:20.926336   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 76/120
	I0404 21:50:21.927955   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 77/120
	I0404 21:50:22.930059   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 78/120
	I0404 21:50:23.931539   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 79/120
	I0404 21:50:24.933941   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 80/120
	I0404 21:50:25.935305   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 81/120
	I0404 21:50:26.936915   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 82/120
	I0404 21:50:27.938814   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 83/120
	I0404 21:50:28.940032   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 84/120
	I0404 21:50:29.942024   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 85/120
	I0404 21:50:30.943837   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 86/120
	I0404 21:50:31.945179   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 87/120
	I0404 21:50:32.946493   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 88/120
	I0404 21:50:33.948689   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 89/120
	I0404 21:50:34.950854   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 90/120
	I0404 21:50:35.952161   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 91/120
	I0404 21:50:36.954372   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 92/120
	I0404 21:50:37.955537   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 93/120
	I0404 21:50:38.956864   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 94/120
	I0404 21:50:39.958610   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 95/120
	I0404 21:50:40.959987   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 96/120
	I0404 21:50:41.961468   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 97/120
	I0404 21:50:42.963891   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 98/120
	I0404 21:50:43.965318   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 99/120
	I0404 21:50:44.967447   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 100/120
	I0404 21:50:45.969454   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 101/120
	I0404 21:50:46.971003   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 102/120
	I0404 21:50:47.972552   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 103/120
	I0404 21:50:48.974559   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 104/120
	I0404 21:50:49.976658   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 105/120
	I0404 21:50:50.978262   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 106/120
	I0404 21:50:51.980051   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 107/120
	I0404 21:50:52.981893   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 108/120
	I0404 21:50:53.983544   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 109/120
	I0404 21:50:54.985089   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 110/120
	I0404 21:50:55.986608   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 111/120
	I0404 21:50:56.987931   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 112/120
	I0404 21:50:57.989339   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 113/120
	I0404 21:50:58.990881   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 114/120
	I0404 21:50:59.992861   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 115/120
	I0404 21:51:00.994691   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 116/120
	I0404 21:51:01.996112   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 117/120
	I0404 21:51:02.997964   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 118/120
	I0404 21:51:04.000600   25343 main.go:141] libmachine: (ha-454952-m02) Waiting for machine to stop 119/120
	I0404 21:51:05.001235   25343 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0404 21:51:05.001368   25343 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-454952 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr: exit status 3 (19.286114898s)

                                                
                                                
-- stdout --
	ha-454952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-454952-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 21:51:05.060677   25673 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:51:05.060809   25673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:05.060821   25673 out.go:304] Setting ErrFile to fd 2...
	I0404 21:51:05.060826   25673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:05.061065   25673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:51:05.061282   25673 out.go:298] Setting JSON to false
	I0404 21:51:05.061311   25673 mustload.go:65] Loading cluster: ha-454952
	I0404 21:51:05.061382   25673 notify.go:220] Checking for updates...
	I0404 21:51:05.061849   25673 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:51:05.061870   25673 status.go:255] checking status of ha-454952 ...
	I0404 21:51:05.062382   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:05.062427   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:05.080868   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38463
	I0404 21:51:05.081356   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:05.082000   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:05.082024   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:05.082476   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:05.082717   25673 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:51:05.084603   25673 status.go:330] ha-454952 host status = "Running" (err=<nil>)
	I0404 21:51:05.084621   25673 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:05.084959   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:05.084997   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:05.100019   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I0404 21:51:05.100558   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:05.101058   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:05.101089   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:05.101475   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:05.101693   25673 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:51:05.105136   25673 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:05.105692   25673 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:05.105743   25673 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:05.106002   25673 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:05.106335   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:05.106378   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:05.121342   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I0404 21:51:05.121878   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:05.122386   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:05.122410   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:05.122738   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:05.122949   25673 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:51:05.123131   25673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:05.123162   25673 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:51:05.126245   25673 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:05.126765   25673 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:05.126795   25673 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:05.126866   25673 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:51:05.127010   25673 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:51:05.127162   25673 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:51:05.127306   25673 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:51:05.217080   25673 ssh_runner.go:195] Run: systemctl --version
	I0404 21:51:05.225061   25673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:05.250836   25673 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:51:05.250888   25673 api_server.go:166] Checking apiserver status ...
	I0404 21:51:05.250945   25673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:51:05.269569   25673 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	W0404 21:51:05.285237   25673 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:51:05.285296   25673 ssh_runner.go:195] Run: ls
	I0404 21:51:05.290253   25673 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:51:05.295098   25673 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:51:05.295125   25673 status.go:422] ha-454952 apiserver status = Running (err=<nil>)
	I0404 21:51:05.295137   25673 status.go:257] ha-454952 status: &{Name:ha-454952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:51:05.295161   25673 status.go:255] checking status of ha-454952-m02 ...
	I0404 21:51:05.295596   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:05.295643   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:05.311248   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I0404 21:51:05.311731   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:05.312276   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:05.312291   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:05.312693   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:05.312918   25673 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 21:51:05.314687   25673 status.go:330] ha-454952-m02 host status = "Running" (err=<nil>)
	I0404 21:51:05.314703   25673 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:05.315032   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:05.315089   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:05.330006   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43925
	I0404 21:51:05.330445   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:05.330999   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:05.331029   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:05.331434   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:05.331645   25673 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:51:05.334932   25673 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:05.335317   25673 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:05.335352   25673 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:05.335569   25673 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:05.335869   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:05.335915   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:05.351007   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38125
	I0404 21:51:05.351427   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:05.351990   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:05.352012   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:05.352356   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:05.352616   25673 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:51:05.352875   25673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:05.352896   25673 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:51:05.356453   25673 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:05.356906   25673 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:05.356943   25673 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:05.357081   25673 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:51:05.357281   25673 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:51:05.357449   25673 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:51:05.357596   25673 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	W0404 21:51:23.908389   25673 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.60:22: connect: no route to host
	W0404 21:51:23.908462   25673 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	E0404 21:51:23.908475   25673 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:23.908485   25673 status.go:257] ha-454952-m02 status: &{Name:ha-454952-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0404 21:51:23.908506   25673 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:23.908536   25673 status.go:255] checking status of ha-454952-m03 ...
	I0404 21:51:23.908841   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:23.908902   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:23.925623   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34939
	I0404 21:51:23.926464   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:23.928093   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:23.928113   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:23.928586   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:23.928798   25673 main.go:141] libmachine: (ha-454952-m03) Calling .GetState
	I0404 21:51:23.930712   25673 status.go:330] ha-454952-m03 host status = "Running" (err=<nil>)
	I0404 21:51:23.930734   25673 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:51:23.931182   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:23.931223   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:23.945782   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38901
	I0404 21:51:23.946198   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:23.946698   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:23.946772   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:23.947140   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:23.947369   25673 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:51:23.950106   25673 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:23.950504   25673 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:51:23.950526   25673 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:23.950639   25673 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:51:23.950976   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:23.951019   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:23.965786   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0404 21:51:23.966217   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:23.966674   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:23.966696   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:23.967072   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:23.967260   25673 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:51:23.967449   25673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:23.967472   25673 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:51:23.970354   25673 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:23.970787   25673 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:51:23.970842   25673 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:23.970923   25673 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:51:23.971107   25673 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:51:23.971249   25673 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:51:23.971401   25673 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:51:24.060482   25673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:24.081985   25673 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:51:24.082020   25673 api_server.go:166] Checking apiserver status ...
	I0404 21:51:24.082062   25673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:51:24.099611   25673 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0404 21:51:24.111990   25673 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:51:24.112053   25673 ssh_runner.go:195] Run: ls
	I0404 21:51:24.117393   25673 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:51:24.122960   25673 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:51:24.122983   25673 status.go:422] ha-454952-m03 apiserver status = Running (err=<nil>)
	I0404 21:51:24.122994   25673 status.go:257] ha-454952-m03 status: &{Name:ha-454952-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:51:24.123021   25673 status.go:255] checking status of ha-454952-m04 ...
	I0404 21:51:24.123299   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:24.123338   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:24.138089   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0404 21:51:24.138496   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:24.139044   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:24.139060   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:24.139408   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:24.139611   25673 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 21:51:24.141542   25673 status.go:330] ha-454952-m04 host status = "Running" (err=<nil>)
	I0404 21:51:24.141560   25673 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:51:24.141840   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:24.141873   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:24.156900   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
	I0404 21:51:24.157281   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:24.157732   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:24.157757   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:24.158110   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:24.158278   25673 main.go:141] libmachine: (ha-454952-m04) Calling .GetIP
	I0404 21:51:24.161125   25673 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:24.161535   25673 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:51:24.161564   25673 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:24.161695   25673 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:51:24.161979   25673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:24.162015   25673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:24.176600   25673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38681
	I0404 21:51:24.176984   25673 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:24.177384   25673 main.go:141] libmachine: Using API Version  1
	I0404 21:51:24.177402   25673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:24.177724   25673 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:24.177943   25673 main.go:141] libmachine: (ha-454952-m04) Calling .DriverName
	I0404 21:51:24.178134   25673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:24.178154   25673 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHHostname
	I0404 21:51:24.180894   25673 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:24.181314   25673 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:51:24.181345   25673 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:24.181459   25673 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHPort
	I0404 21:51:24.181628   25673 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHKeyPath
	I0404 21:51:24.181781   25673 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHUsername
	I0404 21:51:24.181939   25673 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m04/id_rsa Username:docker}
	I0404 21:51:24.269971   25673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:24.286784   25673 status.go:257] ha-454952-m04 status: &{Name:ha-454952-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr" : exit status 3
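Why the status command exits with status 3: the stderr above shows that after the preceding node stop m02 the check can no longer reach ha-454952-m02 (192.168.39.60) over SSH ("no route to host"), so that node is reported as Host:Error / Kubelet:Nonexistent and ha_test.go:372 treats the non-zero exit as a failure. A minimal sketch of the per-node disk check the log performs, assuming direct SSH access from the Jenkins host (IP, key path, user and command are taken from the log above; this approximates what status.go/sshutil run, it is not the test's own code):

	# status runs this over SSH on every node to read /var usage; on the stopped
	# m02 the TCP dial fails with "no route to host", which yields Host:Error
	ssh -o ConnectTimeout=10 \
	    -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa \
	    docker@192.168.39.60 "df -h /var | awk 'NR==2{print \$5}'"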
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-454952 -n ha-454952
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-454952 logs -n 25: (1.642453913s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:48 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3335929805/001/cp-test_ha-454952-m03.txt |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:48 UTC |
	|         | ha-454952-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:48 UTC |
	|         | ha-454952:/home/docker/cp-test_ha-454952-m03_ha-454952.txt                       |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:48 UTC |
	|         | ha-454952-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952 sudo cat                                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-454952-m03_ha-454952.txt                                 |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m02:/home/docker/cp-test_ha-454952-m03_ha-454952-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m02 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m03_ha-454952-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04:/home/docker/cp-test_ha-454952-m03_ha-454952-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m04 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m03_ha-454952-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp testdata/cp-test.txt                                                | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3335929805/001/cp-test_ha-454952-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952:/home/docker/cp-test_ha-454952-m04_ha-454952.txt                       |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952 sudo cat                                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952.txt                                 |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m02:/home/docker/cp-test_ha-454952-m04_ha-454952-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m02 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m03:/home/docker/cp-test_ha-454952-m04_ha-454952-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m03 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-454952 node stop m02 -v=7                                                     | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
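	Note on the audit trail above: the last recorded command, ha-454952 node stop m02 (started 04 Apr 24 21:49 UTC, no End Time recorded), is the stop that leaves m02 unreachable for the status check whose failure is reported above. A hedged reproduction sketch of that sequence, using only commands recorded in this report (same binary and profile name as this run; a working KVM/libvirt environment is assumed):
	
	# stop the secondary control-plane node, then query cluster status;
	# in this run the status call is what fails with exit status 3
	out/minikube-linux-amd64 -p ha-454952 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr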
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 21:44:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 21:44:02.650394   21531 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:44:02.650607   21531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:44:02.650616   21531 out.go:304] Setting ErrFile to fd 2...
	I0404 21:44:02.650620   21531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:44:02.650826   21531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:44:02.651386   21531 out.go:298] Setting JSON to false
	I0404 21:44:02.652235   21531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1588,"bootTime":1712265455,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 21:44:02.652297   21531 start.go:139] virtualization: kvm guest
	I0404 21:44:02.654291   21531 out.go:177] * [ha-454952] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 21:44:02.655636   21531 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 21:44:02.657036   21531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 21:44:02.655660   21531 notify.go:220] Checking for updates...
	I0404 21:44:02.659755   21531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:44:02.661170   21531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:44:02.662602   21531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 21:44:02.663918   21531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 21:44:02.665410   21531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 21:44:02.700312   21531 out.go:177] * Using the kvm2 driver based on user configuration
	I0404 21:44:02.701877   21531 start.go:297] selected driver: kvm2
	I0404 21:44:02.701907   21531 start.go:901] validating driver "kvm2" against <nil>
	I0404 21:44:02.701919   21531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 21:44:02.702602   21531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:44:02.702713   21531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 21:44:02.717645   21531 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 21:44:02.717726   21531 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0404 21:44:02.717927   21531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 21:44:02.717977   21531 cni.go:84] Creating CNI manager for ""
	I0404 21:44:02.717988   21531 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0404 21:44:02.717993   21531 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0404 21:44:02.718036   21531 start.go:340] cluster config:
	{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:44:02.718119   21531 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:44:02.720241   21531 out.go:177] * Starting "ha-454952" primary control-plane node in "ha-454952" cluster
	I0404 21:44:02.721812   21531 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:44:02.721859   21531 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0404 21:44:02.721868   21531 cache.go:56] Caching tarball of preloaded images
	I0404 21:44:02.721945   21531 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 21:44:02.721956   21531 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0404 21:44:02.722293   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:44:02.722316   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json: {Name:mk4e70ee4269c9cb59f2948d042f0e4baab49cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:02.722443   21531 start.go:360] acquireMachinesLock for ha-454952: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 21:44:02.722477   21531 start.go:364] duration metric: took 21.698µs to acquireMachinesLock for "ha-454952"
	I0404 21:44:02.722496   21531 start.go:93] Provisioning new machine with config: &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:44:02.722554   21531 start.go:125] createHost starting for "" (driver="kvm2")
	I0404 21:44:02.724484   21531 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0404 21:44:02.724632   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:02.724674   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:02.738825   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0404 21:44:02.739320   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:02.739884   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:02.739905   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:02.740267   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:02.740494   21531 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:44:02.740655   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:02.740912   21531 start.go:159] libmachine.API.Create for "ha-454952" (driver="kvm2")
	I0404 21:44:02.740964   21531 client.go:168] LocalClient.Create starting
	I0404 21:44:02.741006   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem
	I0404 21:44:02.741067   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:44:02.741092   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:44:02.741161   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem
	I0404 21:44:02.741187   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:44:02.741204   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:44:02.741228   21531 main.go:141] libmachine: Running pre-create checks...
	I0404 21:44:02.741247   21531 main.go:141] libmachine: (ha-454952) Calling .PreCreateCheck
	I0404 21:44:02.741602   21531 main.go:141] libmachine: (ha-454952) Calling .GetConfigRaw
	I0404 21:44:02.742069   21531 main.go:141] libmachine: Creating machine...
	I0404 21:44:02.742086   21531 main.go:141] libmachine: (ha-454952) Calling .Create
	I0404 21:44:02.742265   21531 main.go:141] libmachine: (ha-454952) Creating KVM machine...
	I0404 21:44:02.743630   21531 main.go:141] libmachine: (ha-454952) DBG | found existing default KVM network
	I0404 21:44:02.744377   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:02.744215   21554 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0404 21:44:02.744409   21531 main.go:141] libmachine: (ha-454952) DBG | created network xml: 
	I0404 21:44:02.744428   21531 main.go:141] libmachine: (ha-454952) DBG | <network>
	I0404 21:44:02.744438   21531 main.go:141] libmachine: (ha-454952) DBG |   <name>mk-ha-454952</name>
	I0404 21:44:02.744458   21531 main.go:141] libmachine: (ha-454952) DBG |   <dns enable='no'/>
	I0404 21:44:02.744468   21531 main.go:141] libmachine: (ha-454952) DBG |   
	I0404 21:44:02.744479   21531 main.go:141] libmachine: (ha-454952) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0404 21:44:02.744484   21531 main.go:141] libmachine: (ha-454952) DBG |     <dhcp>
	I0404 21:44:02.744492   21531 main.go:141] libmachine: (ha-454952) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0404 21:44:02.744498   21531 main.go:141] libmachine: (ha-454952) DBG |     </dhcp>
	I0404 21:44:02.744521   21531 main.go:141] libmachine: (ha-454952) DBG |   </ip>
	I0404 21:44:02.744545   21531 main.go:141] libmachine: (ha-454952) DBG |   
	I0404 21:44:02.744562   21531 main.go:141] libmachine: (ha-454952) DBG | </network>
	I0404 21:44:02.744575   21531 main.go:141] libmachine: (ha-454952) DBG | 
	I0404 21:44:02.749979   21531 main.go:141] libmachine: (ha-454952) DBG | trying to create private KVM network mk-ha-454952 192.168.39.0/24...
	I0404 21:44:02.815031   21531 main.go:141] libmachine: (ha-454952) DBG | private KVM network mk-ha-454952 192.168.39.0/24 created
	I0404 21:44:02.815062   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:02.815011   21554 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:44:02.815071   21531 main.go:141] libmachine: (ha-454952) Setting up store path in /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952 ...
	I0404 21:44:02.815081   21531 main.go:141] libmachine: (ha-454952) Building disk image from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 21:44:02.815130   21531 main.go:141] libmachine: (ha-454952) Downloading /home/jenkins/minikube-integration/16143-5297/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0404 21:44:03.040505   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:03.040387   21554 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa...
	I0404 21:44:03.155462   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:03.155291   21554 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/ha-454952.rawdisk...
	I0404 21:44:03.155494   21531 main.go:141] libmachine: (ha-454952) DBG | Writing magic tar header
	I0404 21:44:03.155508   21531 main.go:141] libmachine: (ha-454952) DBG | Writing SSH key tar header
	I0404 21:44:03.155519   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:03.155407   21554 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952 ...
	I0404 21:44:03.155533   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952 (perms=drwx------)
	I0404 21:44:03.155547   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines (perms=drwxr-xr-x)
	I0404 21:44:03.155555   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube (perms=drwxr-xr-x)
	I0404 21:44:03.155562   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297 (perms=drwxrwxr-x)
	I0404 21:44:03.155567   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0404 21:44:03.155575   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0404 21:44:03.155581   21531 main.go:141] libmachine: (ha-454952) Creating domain...
	I0404 21:44:03.155616   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952
	I0404 21:44:03.155675   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines
	I0404 21:44:03.155693   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:44:03.155704   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297
	I0404 21:44:03.155737   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0404 21:44:03.155750   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins
	I0404 21:44:03.155764   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home
	I0404 21:44:03.155783   21531 main.go:141] libmachine: (ha-454952) DBG | Skipping /home - not owner
	I0404 21:44:03.156871   21531 main.go:141] libmachine: (ha-454952) define libvirt domain using xml: 
	I0404 21:44:03.156895   21531 main.go:141] libmachine: (ha-454952) <domain type='kvm'>
	I0404 21:44:03.156903   21531 main.go:141] libmachine: (ha-454952)   <name>ha-454952</name>
	I0404 21:44:03.156908   21531 main.go:141] libmachine: (ha-454952)   <memory unit='MiB'>2200</memory>
	I0404 21:44:03.156914   21531 main.go:141] libmachine: (ha-454952)   <vcpu>2</vcpu>
	I0404 21:44:03.156919   21531 main.go:141] libmachine: (ha-454952)   <features>
	I0404 21:44:03.156924   21531 main.go:141] libmachine: (ha-454952)     <acpi/>
	I0404 21:44:03.156927   21531 main.go:141] libmachine: (ha-454952)     <apic/>
	I0404 21:44:03.156934   21531 main.go:141] libmachine: (ha-454952)     <pae/>
	I0404 21:44:03.156941   21531 main.go:141] libmachine: (ha-454952)     
	I0404 21:44:03.156950   21531 main.go:141] libmachine: (ha-454952)   </features>
	I0404 21:44:03.156959   21531 main.go:141] libmachine: (ha-454952)   <cpu mode='host-passthrough'>
	I0404 21:44:03.156968   21531 main.go:141] libmachine: (ha-454952)   
	I0404 21:44:03.156986   21531 main.go:141] libmachine: (ha-454952)   </cpu>
	I0404 21:44:03.156998   21531 main.go:141] libmachine: (ha-454952)   <os>
	I0404 21:44:03.157006   21531 main.go:141] libmachine: (ha-454952)     <type>hvm</type>
	I0404 21:44:03.157011   21531 main.go:141] libmachine: (ha-454952)     <boot dev='cdrom'/>
	I0404 21:44:03.157018   21531 main.go:141] libmachine: (ha-454952)     <boot dev='hd'/>
	I0404 21:44:03.157027   21531 main.go:141] libmachine: (ha-454952)     <bootmenu enable='no'/>
	I0404 21:44:03.157037   21531 main.go:141] libmachine: (ha-454952)   </os>
	I0404 21:44:03.157065   21531 main.go:141] libmachine: (ha-454952)   <devices>
	I0404 21:44:03.157090   21531 main.go:141] libmachine: (ha-454952)     <disk type='file' device='cdrom'>
	I0404 21:44:03.157110   21531 main.go:141] libmachine: (ha-454952)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/boot2docker.iso'/>
	I0404 21:44:03.157125   21531 main.go:141] libmachine: (ha-454952)       <target dev='hdc' bus='scsi'/>
	I0404 21:44:03.157139   21531 main.go:141] libmachine: (ha-454952)       <readonly/>
	I0404 21:44:03.157148   21531 main.go:141] libmachine: (ha-454952)     </disk>
	I0404 21:44:03.157157   21531 main.go:141] libmachine: (ha-454952)     <disk type='file' device='disk'>
	I0404 21:44:03.157165   21531 main.go:141] libmachine: (ha-454952)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0404 21:44:03.157174   21531 main.go:141] libmachine: (ha-454952)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/ha-454952.rawdisk'/>
	I0404 21:44:03.157183   21531 main.go:141] libmachine: (ha-454952)       <target dev='hda' bus='virtio'/>
	I0404 21:44:03.157191   21531 main.go:141] libmachine: (ha-454952)     </disk>
	I0404 21:44:03.157203   21531 main.go:141] libmachine: (ha-454952)     <interface type='network'>
	I0404 21:44:03.157216   21531 main.go:141] libmachine: (ha-454952)       <source network='mk-ha-454952'/>
	I0404 21:44:03.157227   21531 main.go:141] libmachine: (ha-454952)       <model type='virtio'/>
	I0404 21:44:03.157235   21531 main.go:141] libmachine: (ha-454952)     </interface>
	I0404 21:44:03.157243   21531 main.go:141] libmachine: (ha-454952)     <interface type='network'>
	I0404 21:44:03.157253   21531 main.go:141] libmachine: (ha-454952)       <source network='default'/>
	I0404 21:44:03.157261   21531 main.go:141] libmachine: (ha-454952)       <model type='virtio'/>
	I0404 21:44:03.157284   21531 main.go:141] libmachine: (ha-454952)     </interface>
	I0404 21:44:03.157306   21531 main.go:141] libmachine: (ha-454952)     <serial type='pty'>
	I0404 21:44:03.157325   21531 main.go:141] libmachine: (ha-454952)       <target port='0'/>
	I0404 21:44:03.157341   21531 main.go:141] libmachine: (ha-454952)     </serial>
	I0404 21:44:03.157350   21531 main.go:141] libmachine: (ha-454952)     <console type='pty'>
	I0404 21:44:03.157371   21531 main.go:141] libmachine: (ha-454952)       <target type='serial' port='0'/>
	I0404 21:44:03.157385   21531 main.go:141] libmachine: (ha-454952)     </console>
	I0404 21:44:03.157395   21531 main.go:141] libmachine: (ha-454952)     <rng model='virtio'>
	I0404 21:44:03.157409   21531 main.go:141] libmachine: (ha-454952)       <backend model='random'>/dev/random</backend>
	I0404 21:44:03.157419   21531 main.go:141] libmachine: (ha-454952)     </rng>
	I0404 21:44:03.157432   21531 main.go:141] libmachine: (ha-454952)     
	I0404 21:44:03.157439   21531 main.go:141] libmachine: (ha-454952)     
	I0404 21:44:03.157474   21531 main.go:141] libmachine: (ha-454952)   </devices>
	I0404 21:44:03.157491   21531 main.go:141] libmachine: (ha-454952) </domain>
	I0404 21:44:03.157502   21531 main.go:141] libmachine: (ha-454952) 
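	The block above is the libvirt domain XML minikube defines for the primary node: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a cdrom, the raw disk image, and two virtio NICs (networks mk-ha-454952 and default). As a hedged aside, the same definition can be inspected directly on the Jenkins host once the domain exists, assuming virsh is available to that user:
	
	# dump the live definition of the domain created above
	virsh -c qemu:///system dumpxml ha-454952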
	I0404 21:44:03.161889   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:bd:22:8e in network default
	I0404 21:44:03.162497   21531 main.go:141] libmachine: (ha-454952) Ensuring networks are active...
	I0404 21:44:03.162516   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:03.163268   21531 main.go:141] libmachine: (ha-454952) Ensuring network default is active
	I0404 21:44:03.163590   21531 main.go:141] libmachine: (ha-454952) Ensuring network mk-ha-454952 is active
	I0404 21:44:03.164228   21531 main.go:141] libmachine: (ha-454952) Getting domain xml...
	I0404 21:44:03.165032   21531 main.go:141] libmachine: (ha-454952) Creating domain...
	I0404 21:44:04.361667   21531 main.go:141] libmachine: (ha-454952) Waiting to get IP...
	I0404 21:44:04.362712   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:04.363169   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:04.363190   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:04.363153   21554 retry.go:31] will retry after 295.412756ms: waiting for machine to come up
	I0404 21:44:04.660648   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:04.661103   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:04.661126   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:04.661058   21554 retry.go:31] will retry after 377.487782ms: waiting for machine to come up
	I0404 21:44:05.040684   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:05.041058   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:05.041090   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:05.041004   21554 retry.go:31] will retry after 338.171412ms: waiting for machine to come up
	I0404 21:44:05.380606   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:05.381050   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:05.381072   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:05.381020   21554 retry.go:31] will retry after 586.830945ms: waiting for machine to come up
	I0404 21:44:05.969744   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:05.970148   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:05.970182   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:05.970099   21554 retry.go:31] will retry after 507.958651ms: waiting for machine to come up
	I0404 21:44:06.479955   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:06.480413   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:06.480435   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:06.480362   21554 retry.go:31] will retry after 732.782622ms: waiting for machine to come up
	I0404 21:44:07.214391   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:07.214799   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:07.214843   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:07.214752   21554 retry.go:31] will retry after 1.155748181s: waiting for machine to come up
	I0404 21:44:08.373262   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:08.373700   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:08.373727   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:08.373649   21554 retry.go:31] will retry after 1.039318253s: waiting for machine to come up
	I0404 21:44:09.414830   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:09.415361   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:09.415391   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:09.415320   21554 retry.go:31] will retry after 1.419610359s: waiting for machine to come up
	I0404 21:44:10.836320   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:10.836872   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:10.836905   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:10.836729   21554 retry.go:31] will retry after 1.868110352s: waiting for machine to come up
	I0404 21:44:12.707917   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:12.708396   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:12.708423   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:12.708338   21554 retry.go:31] will retry after 1.901548289s: waiting for machine to come up
	I0404 21:44:14.611238   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:14.611713   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:14.611740   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:14.611667   21554 retry.go:31] will retry after 3.155171492s: waiting for machine to come up
	I0404 21:44:17.768546   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:17.769049   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:17.769076   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:17.769006   21554 retry.go:31] will retry after 4.202788757s: waiting for machine to come up
	I0404 21:44:21.976393   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:21.976825   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:21.976889   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:21.976804   21554 retry.go:31] will retry after 4.385711421s: waiting for machine to come up
	I0404 21:44:26.367198   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.367737   21531 main.go:141] libmachine: (ha-454952) Found IP for machine: 192.168.39.13
	I0404 21:44:26.367850   21531 main.go:141] libmachine: (ha-454952) Reserving static IP address...
	I0404 21:44:26.367871   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has current primary IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.368150   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find host DHCP lease matching {name: "ha-454952", mac: "52:54:00:39:86:be", ip: "192.168.39.13"} in network mk-ha-454952
	I0404 21:44:26.441469   21531 main.go:141] libmachine: (ha-454952) DBG | Getting to WaitForSSH function...
	I0404 21:44:26.441503   21531 main.go:141] libmachine: (ha-454952) Reserved static IP address: 192.168.39.13
	I0404 21:44:26.441516   21531 main.go:141] libmachine: (ha-454952) Waiting for SSH to be available...
	I0404 21:44:26.444532   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.445011   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:86:be}
	I0404 21:44:26.445046   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.445188   21531 main.go:141] libmachine: (ha-454952) DBG | Using SSH client type: external
	I0404 21:44:26.445219   21531 main.go:141] libmachine: (ha-454952) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa (-rw-------)
	I0404 21:44:26.445265   21531 main.go:141] libmachine: (ha-454952) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 21:44:26.445281   21531 main.go:141] libmachine: (ha-454952) DBG | About to run SSH command:
	I0404 21:44:26.445294   21531 main.go:141] libmachine: (ha-454952) DBG | exit 0
	I0404 21:44:26.576310   21531 main.go:141] libmachine: (ha-454952) DBG | SSH cmd err, output: <nil>: 
	I0404 21:44:26.576556   21531 main.go:141] libmachine: (ha-454952) KVM machine creation complete!
	I0404 21:44:26.576934   21531 main.go:141] libmachine: (ha-454952) Calling .GetConfigRaw
	I0404 21:44:26.577438   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:26.577631   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:26.577815   21531 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0404 21:44:26.577827   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:44:26.579195   21531 main.go:141] libmachine: Detecting operating system of created instance...
	I0404 21:44:26.579209   21531 main.go:141] libmachine: Waiting for SSH to be available...
	I0404 21:44:26.579215   21531 main.go:141] libmachine: Getting to WaitForSSH function...
	I0404 21:44:26.579221   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:26.581224   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.581580   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:26.581607   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.581716   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:26.581897   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.582035   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.582188   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:26.582388   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:26.582583   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:26.582596   21531 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0404 21:44:26.695872   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:44:26.695909   21531 main.go:141] libmachine: Detecting the provisioner...
	I0404 21:44:26.695925   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:26.698471   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.698852   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:26.698882   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.699019   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:26.699219   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.699376   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.699514   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:26.699684   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:26.699877   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:26.699891   21531 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0404 21:44:26.813300   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0404 21:44:26.813393   21531 main.go:141] libmachine: found compatible host: buildroot
	I0404 21:44:26.813408   21531 main.go:141] libmachine: Provisioning with buildroot...
	I0404 21:44:26.813423   21531 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:44:26.813658   21531 buildroot.go:166] provisioning hostname "ha-454952"
	I0404 21:44:26.813678   21531 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:44:26.813879   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:26.816475   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.816853   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:26.816873   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.817084   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:26.817246   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.817407   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.817572   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:26.817720   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:26.817879   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:26.817893   21531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-454952 && echo "ha-454952" | sudo tee /etc/hostname
	I0404 21:44:26.953477   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-454952
	
	I0404 21:44:26.953501   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:26.955918   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.956254   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:26.956281   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.956435   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:26.956605   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.956764   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.956900   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:26.957062   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:26.957268   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:26.957303   21531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-454952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-454952/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-454952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 21:44:27.085734   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
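The hostname step issues two shell snippets over SSH: one writes /etc/hostname, the other rewrites the 127.0.1.1 entry in /etc/hosts so it matches. A minimal sketch of how those commands can be assembled in Go; the helper name hostnameCommands is ours, not minikube's.

package main

import "fmt"

// hostnameCommands builds the two shell snippets the provisioner runs over SSH:
// one to set the hostname and /etc/hostname, one to keep the 127.0.1.1 entry
// in /etc/hosts in sync with it.
func hostnameCommands(name string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fixHosts = fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	return setHostname, fixHosts
}

func main() {
	set, fix := hostnameCommands("ha-454952")
	fmt.Println(set)
	fmt.Println(fix)
}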
	I0404 21:44:27.085763   21531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 21:44:27.085800   21531 buildroot.go:174] setting up certificates
	I0404 21:44:27.085814   21531 provision.go:84] configureAuth start
	I0404 21:44:27.085826   21531 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:44:27.086102   21531 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:44:27.088723   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.089070   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.089097   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.089278   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:27.091279   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.091540   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.091565   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.091736   21531 provision.go:143] copyHostCerts
	I0404 21:44:27.091762   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:44:27.091798   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 21:44:27.091807   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:44:27.091867   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 21:44:27.091933   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:44:27.091955   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 21:44:27.091962   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:44:27.091985   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 21:44:27.092021   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:44:27.092037   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 21:44:27.092043   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:44:27.092062   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 21:44:27.092101   21531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.ha-454952 san=[127.0.0.1 192.168.39.13 ha-454952 localhost minikube]
	I0404 21:44:27.342904   21531 provision.go:177] copyRemoteCerts
	I0404 21:44:27.342956   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 21:44:27.342975   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:27.345785   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.346132   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.346166   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.346322   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:27.346522   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.346670   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:27.346786   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:27.440021   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0404 21:44:27.440096   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 21:44:27.469815   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0404 21:44:27.469870   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0404 21:44:27.496876   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0404 21:44:27.496932   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 21:44:27.524389   21531 provision.go:87] duration metric: took 438.562222ms to configureAuth
	I0404 21:44:27.524411   21531 buildroot.go:189] setting minikube options for container-runtime
	I0404 21:44:27.524565   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:44:27.524631   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:27.527186   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.527530   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.527550   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.527750   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:27.527913   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.528041   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.528174   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:27.528313   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:27.528464   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:27.528478   21531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 21:44:27.811117   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 21:44:27.811149   21531 main.go:141] libmachine: Checking connection to Docker...
	I0404 21:44:27.811159   21531 main.go:141] libmachine: (ha-454952) Calling .GetURL
	I0404 21:44:27.812329   21531 main.go:141] libmachine: (ha-454952) DBG | Using libvirt version 6000000
	I0404 21:44:27.814505   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.814878   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.814905   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.815034   21531 main.go:141] libmachine: Docker is up and running!
	I0404 21:44:27.815050   21531 main.go:141] libmachine: Reticulating splines...
	I0404 21:44:27.815058   21531 client.go:171] duration metric: took 25.07408183s to LocalClient.Create
	I0404 21:44:27.815077   21531 start.go:167] duration metric: took 25.074167258s to libmachine.API.Create "ha-454952"
	I0404 21:44:27.815085   21531 start.go:293] postStartSetup for "ha-454952" (driver="kvm2")
	I0404 21:44:27.815094   21531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 21:44:27.815115   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:27.815309   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 21:44:27.815328   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:27.817163   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.817438   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.817461   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.817634   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:27.817783   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.817942   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:27.818039   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:27.906609   21531 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 21:44:27.911083   21531 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 21:44:27.911100   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 21:44:27.911174   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 21:44:27.911268   21531 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 21:44:27.911282   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /etc/ssl/certs/125542.pem
	I0404 21:44:27.911417   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 21:44:27.921755   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:44:27.946614   21531 start.go:296] duration metric: took 131.516007ms for postStartSetup
	I0404 21:44:27.946659   21531 main.go:141] libmachine: (ha-454952) Calling .GetConfigRaw
	I0404 21:44:27.947234   21531 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:44:27.949891   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.950293   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.950327   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.950485   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:44:27.950675   21531 start.go:128] duration metric: took 25.228112122s to createHost
	I0404 21:44:27.950701   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:27.953337   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.953692   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.953710   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.953840   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:27.953986   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.954127   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.954248   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:27.954409   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:27.954572   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:27.954590   21531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 21:44:28.069250   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712267068.043455500
	
	I0404 21:44:28.069276   21531 fix.go:216] guest clock: 1712267068.043455500
	I0404 21:44:28.069283   21531 fix.go:229] Guest: 2024-04-04 21:44:28.0434555 +0000 UTC Remote: 2024-04-04 21:44:27.950687712 +0000 UTC m=+25.347320907 (delta=92.767788ms)
	I0404 21:44:28.069302   21531 fix.go:200] guest clock delta is within tolerance: 92.767788ms
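fix.go compares the guest clock, read via "date +%s.%N" over SSH, against the local clock and accepts the host when the delta stays inside a tolerance. A minimal sketch of that comparison, assuming the guest output has already been captured; the one-second tolerance is illustrative, not the value minikube uses.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns how far it
// is from the local clock.
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := time.Since(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	d, _ := clockDelta("1712267068.043455500\n")
	const tolerance = time.Second // illustrative; pick whatever skew the caller accepts
	fmt.Printf("delta=%s withinTolerance=%v\n", d, d <= tolerance)
}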
	I0404 21:44:28.069307   21531 start.go:83] releasing machines lock for "ha-454952", held for 25.346821713s
	I0404 21:44:28.069325   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:28.069571   21531 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:44:28.072197   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.072579   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:28.072605   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.072752   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:28.073339   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:28.073505   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:28.073602   21531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 21:44:28.073641   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:28.073650   21531 ssh_runner.go:195] Run: cat /version.json
	I0404 21:44:28.073662   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:28.075990   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.076324   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:28.076352   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.076376   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.076506   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:28.076679   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:28.076791   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:28.076817   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.076840   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:28.076948   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:28.077018   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:28.077110   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:28.077250   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:28.077420   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:28.161876   21531 ssh_runner.go:195] Run: systemctl --version
	I0404 21:44:28.196517   21531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 21:44:28.365546   21531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 21:44:28.371823   21531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 21:44:28.371886   21531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 21:44:28.389245   21531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 21:44:28.389266   21531 start.go:494] detecting cgroup driver to use...
	I0404 21:44:28.389343   21531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 21:44:28.408113   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 21:44:28.425185   21531 docker.go:217] disabling cri-docker service (if available) ...
	I0404 21:44:28.425234   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 21:44:28.440355   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 21:44:28.456055   21531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 21:44:28.579016   21531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 21:44:28.730964   21531 docker.go:233] disabling docker service ...
	I0404 21:44:28.731038   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 21:44:28.747024   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 21:44:28.760738   21531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 21:44:28.894085   21531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 21:44:29.037863   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
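With CRI-O selected as the runtime, the cri-docker and docker units are stopped, disabled, and masked before crio is configured, and failures from units that are not present are tolerated. A rough local sketch of that systemctl sequence; the real test drives the same commands through its SSH runner.

package main

import (
	"fmt"
	"os/exec"
)

// disableUnits stops, disables, and masks the given systemd units, logging and
// tolerating errors the way the run above tolerates missing units.
func disableUnits(units ...string) {
	for _, u := range units {
		for _, verb := range []string{"stop", "disable", "mask"} {
			out, err := exec.Command("sudo", "systemctl", verb, u).CombinedOutput()
			if err != nil {
				fmt.Printf("systemctl %s %s: %v (%s) -- continuing\n", verb, u, err, out)
			}
		}
	}
}

func main() {
	disableUnits("cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service")
	// Afterwards, "systemctl is-active docker" should report inactive, as checked above.
}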
	I0404 21:44:29.053162   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 21:44:29.072981   21531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 21:44:29.073044   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.084318   21531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 21:44:29.084391   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.095696   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.106440   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.117716   21531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 21:44:29.129015   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.139990   21531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.158444   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.171998   21531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 21:44:29.183910   21531 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 21:44:29.183971   21531 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 21:44:29.199116   21531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
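When the bridge-nf sysctl fails because br_netfilter is not loaded yet, the code falls back to modprobe and then enables IPv4 forwarding, exactly as the two commands above show. A minimal sketch of that fallback:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter checks the bridge-nf sysctl, loads br_netfilter if the
// proc entry is missing, then turns on ip_forward, mirroring the log above.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf sysctl missing, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}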
	I0404 21:44:29.210129   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:44:29.340830   21531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 21:44:29.494180   21531 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 21:44:29.494265   21531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 21:44:29.500266   21531 start.go:562] Will wait 60s for crictl version
	I0404 21:44:29.500352   21531 ssh_runner.go:195] Run: which crictl
	I0404 21:44:29.504228   21531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 21:44:29.545448   21531 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 21:44:29.545540   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:44:29.575479   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:44:29.608745   21531 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 21:44:29.610316   21531 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:44:29.612701   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:29.612985   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:29.613010   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:29.613173   21531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 21:44:29.617489   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:44:29.631869   21531 kubeadm.go:877] updating cluster {Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 21:44:29.631987   21531 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:44:29.632032   21531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 21:44:29.667707   21531 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 21:44:29.667791   21531 ssh_runner.go:195] Run: which lz4
	I0404 21:44:29.672037   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0404 21:44:29.672145   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 21:44:29.676449   21531 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 21:44:29.676475   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 21:44:31.207161   21531 crio.go:462] duration metric: took 1.535055588s to copy over tarball
	I0404 21:44:31.207271   21531 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 21:44:33.536211   21531 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.328913592s)
	I0404 21:44:33.536247   21531 crio.go:469] duration metric: took 2.329050777s to extract the tarball
	I0404 21:44:33.536256   21531 ssh_runner.go:146] rm: /preloaded.tar.lz4
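Since no preloaded images were found in the CRI-O store, the ~400 MB preload tarball is copied to the guest and unpacked into /var with lz4, preserving security xattrs, then removed. A sketch of the extraction step as it would run on the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Extract the preload tarball into /var, preserving security xattrs so
	// image layers keep their capabilities, then drop the tarball.
	tarball := "/preloaded.tar.lz4"
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		return
	}
	_ = os.Remove(tarball) // would need sudo on a real guest; illustrative only
}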
	I0404 21:44:33.575332   21531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 21:44:33.623579   21531 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 21:44:33.623604   21531 cache_images.go:84] Images are preloaded, skipping loading
	I0404 21:44:33.623613   21531 kubeadm.go:928] updating node { 192.168.39.13 8443 v1.29.3 crio true true} ...
	I0404 21:44:33.623744   21531 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-454952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 21:44:33.623819   21531 ssh_runner.go:195] Run: crio config
	I0404 21:44:33.672380   21531 cni.go:84] Creating CNI manager for ""
	I0404 21:44:33.672404   21531 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0404 21:44:33.672414   21531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 21:44:33.672434   21531 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.13 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-454952 NodeName:ha-454952 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 21:44:33.672583   21531 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-454952"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
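The kubeadm config above is rendered from the option set logged at kubeadm.go:181 (advertise address, node name, CRI socket, pod and service CIDRs, cgroup driver, and so on). A hedged sketch of rendering a small fragment of it with text/template; the Params struct and its field names are ours, not minikube's internal types.

package main

import (
	"os"
	"text/template"
)

// Params holds just the values this fragment needs; the real generator carries
// many more options (see the kubeadm options line above).
type Params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
	PodSubnet        string
	ServiceSubnet    string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, Params{
		AdvertiseAddress: "192.168.39.13",
		BindPort:         8443,
		NodeName:         "ha-454952",
		CRISocket:        "unix:///var/run/crio/crio.sock",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	})
}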
	
	I0404 21:44:33.672613   21531 kube-vip.go:111] generating kube-vip config ...
	I0404 21:44:33.672662   21531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0404 21:44:33.692154   21531 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0404 21:44:33.692294   21531 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
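kube-vip.go first probes whether the IPVS modules can be loaded (the modprobe at 21:44:33.672662) and, since they can, auto-enables control-plane load-balancing via lb_enable in the manifest above. A minimal sketch of that gate; the function name ipvsAvailable is ours.

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable reports whether the kernel modules kube-vip's load balancer
// needs can be loaded; on failure the manifest would be generated without lb_enable.
func ipvsAvailable() bool {
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	return err == nil
}

func main() {
	if ipvsAvailable() {
		fmt.Println("auto-enabling control-plane load-balancing in kube-vip")
	} else {
		fmt.Println("IPVS modules unavailable; generating kube-vip config without lb_enable")
	}
}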
	I0404 21:44:33.692360   21531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 21:44:33.706668   21531 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 21:44:33.706753   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0404 21:44:33.719047   21531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0404 21:44:33.738743   21531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 21:44:33.759868   21531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0404 21:44:33.780371   21531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0404 21:44:33.799501   21531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0404 21:44:33.803857   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:44:33.816893   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:44:33.944901   21531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:44:33.963225   21531 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952 for IP: 192.168.39.13
	I0404 21:44:33.963277   21531 certs.go:194] generating shared ca certs ...
	I0404 21:44:33.963295   21531 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:33.963454   21531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 21:44:33.963514   21531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 21:44:33.963527   21531 certs.go:256] generating profile certs ...
	I0404 21:44:33.963592   21531 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key
	I0404 21:44:33.963610   21531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt with IP's: []
	I0404 21:44:34.310349   21531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt ...
	I0404 21:44:34.310378   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt: {Name:mk842cef776f49e0c375e16a164e1b4ec24172f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:34.310568   21531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key ...
	I0404 21:44:34.310583   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key: {Name:mk2d8b7056432b32bc7806de3137cd82157befd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:34.310685   21531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.f401904e
	I0404 21:44:34.310702   21531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.f401904e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.13 192.168.39.254]
	I0404 21:44:34.519722   21531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.f401904e ...
	I0404 21:44:34.519746   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.f401904e: {Name:mkfae809a19680d483855c0b76ce3d3985f98122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:34.519896   21531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.f401904e ...
	I0404 21:44:34.519913   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.f401904e: {Name:mk6d2209e949a7d3510c9ad4e0a6814435e4ca2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:34.520005   21531 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.f401904e -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt
	I0404 21:44:34.520079   21531 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.f401904e -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key
	I0404 21:44:34.520163   21531 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key
	I0404 21:44:34.520183   21531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt with IP's: []
	I0404 21:44:34.629377   21531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt ...
	I0404 21:44:34.629412   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt: {Name:mkecf129b5a1480677134f643f060ec7d6af66af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:34.629609   21531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key ...
	I0404 21:44:34.629626   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key: {Name:mkde4a9612453c27dcf447317eaa0c633a0f5e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:34.629734   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0404 21:44:34.629755   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0404 21:44:34.629767   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0404 21:44:34.629780   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0404 21:44:34.629791   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0404 21:44:34.629807   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0404 21:44:34.629821   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0404 21:44:34.629836   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0404 21:44:34.629893   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 21:44:34.629939   21531 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 21:44:34.629948   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 21:44:34.629977   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 21:44:34.630002   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 21:44:34.630026   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 21:44:34.630066   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:44:34.630101   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /usr/share/ca-certificates/125542.pem
	I0404 21:44:34.630118   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:44:34.630130   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem -> /usr/share/ca-certificates/12554.pem
	I0404 21:44:34.631167   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 21:44:34.663439   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 21:44:34.689465   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 21:44:34.714884   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 21:44:34.745497   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0404 21:44:34.791169   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 21:44:34.828644   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 21:44:34.853978   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 21:44:34.878857   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 21:44:34.903967   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 21:44:34.929361   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 21:44:34.955370   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 21:44:34.973332   21531 ssh_runner.go:195] Run: openssl version
	I0404 21:44:34.979428   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 21:44:34.991663   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:44:34.996625   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:44:34.996685   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:44:35.002750   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 21:44:35.015463   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 21:44:35.027354   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 21:44:35.031938   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 21:44:35.031984   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 21:44:35.037666   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 21:44:35.049548   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 21:44:35.061358   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 21:44:35.066041   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 21:44:35.066106   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 21:44:35.071886   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
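The lines above show each CA being copied into /usr/share/ca-certificates and then exposed to OpenSSL-style clients by symlinking /etc/ssl/certs/<subject-hash>.0 at it, where the hash comes from "openssl x509 -hash -noout -in". A minimal Go sketch of that hash-and-link step follows; it is a hypothetical standalone helper written for illustration, not minikube's own code, and the paths in main are just the ones seen in the log.

    // subjecthashlink.go - illustrative sketch (not minikube's implementation): hash a CA
    // certificate with openssl and expose it as /etc/ssl/certs/<hash>.0, the layout the
    // log above is establishing for minikubeCA.pem and the user certificates.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkBySubjectHash(certPath, certsDir string) error {
    	// "openssl x509 -hash -noout -in <cert>" prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	// Replace any stale link, mirroring the "ln -fs" commands in the log.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }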
	I0404 21:44:35.084199   21531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 21:44:35.088572   21531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0404 21:44:35.088630   21531 kubeadm.go:391] StartCluster: {Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:44:35.088727   21531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 21:44:35.088799   21531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 21:44:35.128476   21531 cri.go:89] found id: ""
	I0404 21:44:35.128549   21531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0404 21:44:35.139591   21531 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 21:44:35.150620   21531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 21:44:35.161410   21531 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 21:44:35.161438   21531 kubeadm.go:156] found existing configuration files:
	
	I0404 21:44:35.161491   21531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 21:44:35.171678   21531 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 21:44:35.171750   21531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 21:44:35.182280   21531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 21:44:35.192492   21531 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 21:44:35.192563   21531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 21:44:35.203920   21531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 21:44:35.214551   21531 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 21:44:35.214613   21531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 21:44:35.225542   21531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 21:44:35.236489   21531 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 21:44:35.236546   21531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
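The grep/rm sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so kubeadm init starts clean. A Go sketch of that logic, as an assumed standalone helper rather than minikube's actual code:

    // cleanstale.go - sketch of the stale-config sweep seen above: keep a kubeconfig only
    // if it already points at the expected control-plane endpoint, otherwise remove it
    // before running "kubeadm init".
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func sweepStaleConfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			// Missing file or wrong endpoint: remove it, ignoring errors like "rm -f" does.
    			_ = os.Remove(p)
    			fmt.Printf("removed stale config %s\n", p)
    		}
    	}
    }

    func main() {
    	sweepStaleConfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }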
	I0404 21:44:35.247545   21531 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 21:44:35.504554   21531 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 21:44:46.667176   21531 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0404 21:44:46.667234   21531 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 21:44:46.667375   21531 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 21:44:46.667503   21531 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 21:44:46.667627   21531 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 21:44:46.667730   21531 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 21:44:46.669460   21531 out.go:204]   - Generating certificates and keys ...
	I0404 21:44:46.669539   21531 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 21:44:46.669638   21531 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 21:44:46.669740   21531 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0404 21:44:46.669825   21531 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0404 21:44:46.669917   21531 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0404 21:44:46.669994   21531 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0404 21:44:46.670082   21531 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0404 21:44:46.670236   21531 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-454952 localhost] and IPs [192.168.39.13 127.0.0.1 ::1]
	I0404 21:44:46.670325   21531 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0404 21:44:46.670485   21531 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-454952 localhost] and IPs [192.168.39.13 127.0.0.1 ::1]
	I0404 21:44:46.670568   21531 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0404 21:44:46.670647   21531 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0404 21:44:46.670711   21531 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0404 21:44:46.670783   21531 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 21:44:46.670856   21531 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 21:44:46.670938   21531 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0404 21:44:46.671013   21531 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 21:44:46.671182   21531 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 21:44:46.671272   21531 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 21:44:46.671392   21531 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 21:44:46.671493   21531 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 21:44:46.673545   21531 out.go:204]   - Booting up control plane ...
	I0404 21:44:46.673639   21531 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 21:44:46.673722   21531 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 21:44:46.673816   21531 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 21:44:46.673934   21531 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 21:44:46.674051   21531 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 21:44:46.674096   21531 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 21:44:46.674325   21531 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 21:44:46.674425   21531 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.621425 seconds
	I0404 21:44:46.674537   21531 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0404 21:44:46.674714   21531 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0404 21:44:46.674816   21531 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0404 21:44:46.675058   21531 kubeadm.go:309] [mark-control-plane] Marking the node ha-454952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0404 21:44:46.675118   21531 kubeadm.go:309] [bootstrap-token] Using token: ya8q6p.186cu33hp9v28qqx
	I0404 21:44:46.676247   21531 out.go:204]   - Configuring RBAC rules ...
	I0404 21:44:46.676368   21531 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0404 21:44:46.676473   21531 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0404 21:44:46.676646   21531 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0404 21:44:46.676803   21531 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0404 21:44:46.676909   21531 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0404 21:44:46.677028   21531 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0404 21:44:46.677139   21531 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0404 21:44:46.677190   21531 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0404 21:44:46.677232   21531 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0404 21:44:46.677239   21531 kubeadm.go:309] 
	I0404 21:44:46.677286   21531 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0404 21:44:46.677292   21531 kubeadm.go:309] 
	I0404 21:44:46.677367   21531 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0404 21:44:46.677377   21531 kubeadm.go:309] 
	I0404 21:44:46.677398   21531 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0404 21:44:46.677448   21531 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0404 21:44:46.677492   21531 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0404 21:44:46.677503   21531 kubeadm.go:309] 
	I0404 21:44:46.677566   21531 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0404 21:44:46.677577   21531 kubeadm.go:309] 
	I0404 21:44:46.677631   21531 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0404 21:44:46.677638   21531 kubeadm.go:309] 
	I0404 21:44:46.677707   21531 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0404 21:44:46.677819   21531 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0404 21:44:46.677917   21531 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0404 21:44:46.677936   21531 kubeadm.go:309] 
	I0404 21:44:46.678032   21531 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0404 21:44:46.678134   21531 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0404 21:44:46.678144   21531 kubeadm.go:309] 
	I0404 21:44:46.678235   21531 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ya8q6p.186cu33hp9v28qqx \
	I0404 21:44:46.678334   21531 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c \
	I0404 21:44:46.678355   21531 kubeadm.go:309] 	--control-plane 
	I0404 21:44:46.678364   21531 kubeadm.go:309] 
	I0404 21:44:46.678446   21531 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0404 21:44:46.678455   21531 kubeadm.go:309] 
	I0404 21:44:46.678554   21531 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ya8q6p.186cu33hp9v28qqx \
	I0404 21:44:46.678698   21531 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c 
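The join commands printed above carry a --discovery-token-ca-cert-hash, which by kubeadm convention is the SHA-256 of the cluster CA's Subject Public Key Info; joining nodes use it to pin the CA during TLS bootstrap. A short Go sketch that recomputes the value from the CA certificate on disk (the path is the one used earlier in this log):

    // cacerthash.go - sketch: recompute the sha256 discovery hash embedded in the join
    // commands above from the cluster CA certificate (SHA-256 over the CA's
    // Subject Public Key Info, per kubeadm's documented convention).
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }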
	I0404 21:44:46.678710   21531 cni.go:84] Creating CNI manager for ""
	I0404 21:44:46.678717   21531 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0404 21:44:46.680306   21531 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0404 21:44:46.681691   21531 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0404 21:44:46.701404   21531 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0404 21:44:46.701421   21531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0404 21:44:46.761476   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
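With a single-node multinode profile detected, the CNI manager settles on kindnet, writes its manifest to /var/tmp/minikube/cni.yaml on the guest, and applies it with the version-pinned kubectl binary against the on-host kubeconfig. A hedged Go sketch of that apply step (a standalone illustration using os/exec, not minikube's ssh_runner):

    // applycni.go - sketch of the CNI apply step above: write the manifest to the temp
    // path and run the pinned kubectl against the on-host kubeconfig.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func applyManifest(kubectl, kubeconfig string, manifest []byte) error {
    	path := "/var/tmp/minikube/cni.yaml"
    	if err := os.WriteFile(path, manifest, 0o644); err != nil {
    		return err
    	}
    	cmd := exec.Command("sudo", kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", path)
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	manifest, err := os.ReadFile("cni.yaml") // kindnet manifest; ~2438 bytes in the log
    	if err != nil {
    		panic(err)
    	}
    	if err := applyManifest("/var/lib/minikube/binaries/v1.29.3/kubectl",
    		"/var/lib/minikube/kubeconfig", manifest); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }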
	I0404 21:44:47.161763   21531 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 21:44:47.161842   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:47.161899   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-454952 minikube.k8s.io/updated_at=2024_04_04T21_44_47_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=ha-454952 minikube.k8s.io/primary=true
	I0404 21:44:47.186182   21531 ops.go:34] apiserver oom_adj: -16
	I0404 21:44:47.319261   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:47.819595   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:48.320189   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:48.819327   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:49.319704   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:49.819463   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:50.320026   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:50.819391   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:51.320092   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:51.819953   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:52.319560   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:52.819983   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:53.320054   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:53.820167   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:54.320322   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:54.819637   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:55.320325   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:55.820153   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:56.319602   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:56.820208   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:57.319911   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:57.820284   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:58.319665   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:58.819575   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:58.947107   21531 kubeadm.go:1107] duration metric: took 11.785322233s to wait for elevateKubeSystemPrivileges
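The block of repeated "kubectl get sa default" runs above is a simple poll: the command is retried roughly every 500ms until the default ServiceAccount exists (11.8s here), which is the readiness signal before kube-system privileges are elevated. A Go sketch of the same polling shape; the helper name, timeout, and structure are illustrative rather than minikube's own:

    // waitsa.go - sketch of the poll driving the repeated "kubectl get sa default" calls
    // above: retry every 500ms until the default ServiceAccount exists or a deadline passes.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if cmd.Run() == nil {
    			return nil // the default service account exists; the token controller is up
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	start := time.Now()
    	if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.29.3/kubectl",
    		"/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
    }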
	W0404 21:44:58.947153   21531 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0404 21:44:58.947161   21531 kubeadm.go:393] duration metric: took 23.858536385s to StartCluster
	I0404 21:44:58.947176   21531 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:58.947256   21531 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:44:58.947885   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:58.948108   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0404 21:44:58.948112   21531 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:44:58.948221   21531 start.go:240] waiting for startup goroutines ...
	I0404 21:44:58.948208   21531 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 21:44:58.948307   21531 addons.go:69] Setting storage-provisioner=true in profile "ha-454952"
	I0404 21:44:58.948331   21531 addons.go:69] Setting default-storageclass=true in profile "ha-454952"
	I0404 21:44:58.948369   21531 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-454952"
	I0404 21:44:58.948332   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:44:58.948340   21531 addons.go:234] Setting addon storage-provisioner=true in "ha-454952"
	I0404 21:44:58.948515   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:44:58.948729   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:58.948783   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:58.948901   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:58.948930   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:58.964231   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0404 21:44:58.964253   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39427
	I0404 21:44:58.964663   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:58.964666   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:58.965156   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:58.965174   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:58.965313   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:58.965326   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:58.965551   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:58.965660   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:58.965852   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:44:58.966116   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:58.966163   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:58.967828   21531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:44:58.968082   21531 kapi.go:59] client config for ha-454952: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key", CAFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5c6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0404 21:44:58.968559   21531 cert_rotation.go:137] Starting client certificate rotation controller
	I0404 21:44:58.968655   21531 addons.go:234] Setting addon default-storageclass=true in "ha-454952"
	I0404 21:44:58.968703   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:44:58.968954   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:58.968996   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:58.982282   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0404 21:44:58.982824   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:58.983345   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:58.983373   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:58.983666   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37939
	I0404 21:44:58.983760   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:58.983932   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:44:58.984177   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:58.984682   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:58.984704   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:58.985051   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:58.985543   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:58.985564   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:58.985708   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:58.987846   21531 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 21:44:58.989702   21531 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 21:44:58.989725   21531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 21:44:58.989745   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:58.993077   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:58.993503   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:58.993536   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:58.993716   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:58.993907   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:58.994082   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:58.994247   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:59.001411   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37781
	I0404 21:44:59.001821   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:59.002254   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:59.002277   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:59.002574   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:59.002769   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:44:59.004536   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:59.004826   21531 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 21:44:59.004843   21531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 21:44:59.004860   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:59.007454   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:59.007846   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:59.007873   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:59.007997   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:59.008163   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:59.008303   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:59.008456   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:59.261455   21531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 21:44:59.273206   21531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 21:44:59.292694   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
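The sed pipeline above edits the coredns ConfigMap in place: it splices a hosts block that maps host.minikube.internal to the gateway IP ahead of the "forward . /etc/resolv.conf" directive, then replaces the ConfigMap. A Go sketch of just the Corefile transform (the sample Corefile and function name are assumptions for illustration; applying the result back to the cluster is omitted):

    // corednshosts.go - sketch of the Corefile edit performed by the sed pipeline above:
    // splice a hosts{} entry for host.minikube.internal in front of the forward plugin.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
    	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			out.WriteString(hostsBlock) // insert just before the forward plugin
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
    }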
	I0404 21:44:59.678008   21531 main.go:141] libmachine: Making call to close driver server
	I0404 21:44:59.678036   21531 main.go:141] libmachine: (ha-454952) Calling .Close
	I0404 21:44:59.678529   21531 main.go:141] libmachine: (ha-454952) DBG | Closing plugin on server side
	I0404 21:44:59.678531   21531 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:44:59.678550   21531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:44:59.678559   21531 main.go:141] libmachine: Making call to close driver server
	I0404 21:44:59.678573   21531 main.go:141] libmachine: (ha-454952) Calling .Close
	I0404 21:44:59.678833   21531 main.go:141] libmachine: (ha-454952) DBG | Closing plugin on server side
	I0404 21:44:59.678865   21531 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:44:59.678880   21531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:44:59.679028   21531 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0404 21:44:59.679039   21531 round_trippers.go:469] Request Headers:
	I0404 21:44:59.679050   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:44:59.679058   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:44:59.690493   21531 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0404 21:44:59.691028   21531 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0404 21:44:59.691046   21531 round_trippers.go:469] Request Headers:
	I0404 21:44:59.691055   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:44:59.691061   21531 round_trippers.go:473]     Content-Type: application/json
	I0404 21:44:59.691065   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:44:59.695796   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:44:59.695975   21531 main.go:141] libmachine: Making call to close driver server
	I0404 21:44:59.695991   21531 main.go:141] libmachine: (ha-454952) Calling .Close
	I0404 21:44:59.696288   21531 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:44:59.696304   21531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:45:00.266307   21531 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0404 21:45:00.266540   21531 main.go:141] libmachine: Making call to close driver server
	I0404 21:45:00.266558   21531 main.go:141] libmachine: (ha-454952) Calling .Close
	I0404 21:45:00.266863   21531 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:45:00.266878   21531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:45:00.266887   21531 main.go:141] libmachine: Making call to close driver server
	I0404 21:45:00.266896   21531 main.go:141] libmachine: (ha-454952) Calling .Close
	I0404 21:45:00.267117   21531 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:45:00.267134   21531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:45:00.269215   21531 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0404 21:45:00.270610   21531 addons.go:505] duration metric: took 1.322405012s for enable addons: enabled=[default-storageclass storage-provisioner]
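The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is the default-storageclass addon marking the "standard" class as the cluster default. A sketch of the equivalent update using k8s.io/client-go (an assumed dependency here; the kubeconfig path is the one from this log, and the annotation key is the standard Kubernetes one):

    // defaultsc.go - sketch of the storageclasses PUT in the log: annotate the "standard"
    // StorageClass as the cluster default.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("standard marked as default StorageClass")
    }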
	I0404 21:45:00.270652   21531 start.go:245] waiting for cluster config update ...
	I0404 21:45:00.270671   21531 start.go:254] writing updated cluster config ...
	I0404 21:45:00.272755   21531 out.go:177] 
	I0404 21:45:00.274535   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:45:00.274629   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:45:00.276821   21531 out.go:177] * Starting "ha-454952-m02" control-plane node in "ha-454952" cluster
	I0404 21:45:00.278381   21531 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:45:00.278414   21531 cache.go:56] Caching tarball of preloaded images
	I0404 21:45:00.278519   21531 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 21:45:00.278534   21531 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0404 21:45:00.278636   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:45:00.278871   21531 start.go:360] acquireMachinesLock for ha-454952-m02: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 21:45:00.278932   21531 start.go:364] duration metric: took 35.093µs to acquireMachinesLock for "ha-454952-m02"
	I0404 21:45:00.278961   21531 start.go:93] Provisioning new machine with config: &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:45:00.279049   21531 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0404 21:45:00.281049   21531 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0404 21:45:00.281152   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:45:00.281186   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:45:00.300272   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I0404 21:45:00.300765   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:45:00.301274   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:45:00.301300   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:45:00.301631   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:45:00.301871   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetMachineName
	I0404 21:45:00.302006   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:00.302148   21531 start.go:159] libmachine.API.Create for "ha-454952" (driver="kvm2")
	I0404 21:45:00.302167   21531 client.go:168] LocalClient.Create starting
	I0404 21:45:00.302193   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem
	I0404 21:45:00.302224   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:45:00.302239   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:45:00.302301   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem
	I0404 21:45:00.302328   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:45:00.302346   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:45:00.302372   21531 main.go:141] libmachine: Running pre-create checks...
	I0404 21:45:00.302388   21531 main.go:141] libmachine: (ha-454952-m02) Calling .PreCreateCheck
	I0404 21:45:00.302550   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetConfigRaw
	I0404 21:45:00.302938   21531 main.go:141] libmachine: Creating machine...
	I0404 21:45:00.302954   21531 main.go:141] libmachine: (ha-454952-m02) Calling .Create
	I0404 21:45:00.303078   21531 main.go:141] libmachine: (ha-454952-m02) Creating KVM machine...
	I0404 21:45:00.304163   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found existing default KVM network
	I0404 21:45:00.304281   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found existing private KVM network mk-ha-454952
	I0404 21:45:00.304509   21531 main.go:141] libmachine: (ha-454952-m02) Setting up store path in /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02 ...
	I0404 21:45:00.304535   21531 main.go:141] libmachine: (ha-454952-m02) Building disk image from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 21:45:00.304576   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:00.304477   21881 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:45:00.304686   21531 main.go:141] libmachine: (ha-454952-m02) Downloading /home/jenkins/minikube-integration/16143-5297/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0404 21:45:00.523864   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:00.523736   21881 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa...
	I0404 21:45:00.584744   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:00.584610   21881 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/ha-454952-m02.rawdisk...
	I0404 21:45:00.584777   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Writing magic tar header
	I0404 21:45:00.584788   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Writing SSH key tar header
	I0404 21:45:00.584799   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:00.584730   21881 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02 ...
	I0404 21:45:00.584880   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02
	I0404 21:45:00.584917   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02 (perms=drwx------)
	I0404 21:45:00.584947   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines
	I0404 21:45:00.584963   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines (perms=drwxr-xr-x)
	I0404 21:45:00.584978   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube (perms=drwxr-xr-x)
	I0404 21:45:00.584991   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:45:00.585005   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297 (perms=drwxrwxr-x)
	I0404 21:45:00.585018   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0404 21:45:00.585030   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297
	I0404 21:45:00.585042   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0404 21:45:00.585057   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0404 21:45:00.585070   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins
	I0404 21:45:00.585081   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home
	I0404 21:45:00.585099   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Skipping /home - not owner
	I0404 21:45:00.585111   21531 main.go:141] libmachine: (ha-454952-m02) Creating domain...
	I0404 21:45:00.586000   21531 main.go:141] libmachine: (ha-454952-m02) define libvirt domain using xml: 
	I0404 21:45:00.586027   21531 main.go:141] libmachine: (ha-454952-m02) <domain type='kvm'>
	I0404 21:45:00.586038   21531 main.go:141] libmachine: (ha-454952-m02)   <name>ha-454952-m02</name>
	I0404 21:45:00.586048   21531 main.go:141] libmachine: (ha-454952-m02)   <memory unit='MiB'>2200</memory>
	I0404 21:45:00.586060   21531 main.go:141] libmachine: (ha-454952-m02)   <vcpu>2</vcpu>
	I0404 21:45:00.586069   21531 main.go:141] libmachine: (ha-454952-m02)   <features>
	I0404 21:45:00.586077   21531 main.go:141] libmachine: (ha-454952-m02)     <acpi/>
	I0404 21:45:00.586088   21531 main.go:141] libmachine: (ha-454952-m02)     <apic/>
	I0404 21:45:00.586097   21531 main.go:141] libmachine: (ha-454952-m02)     <pae/>
	I0404 21:45:00.586114   21531 main.go:141] libmachine: (ha-454952-m02)     
	I0404 21:45:00.586122   21531 main.go:141] libmachine: (ha-454952-m02)   </features>
	I0404 21:45:00.586128   21531 main.go:141] libmachine: (ha-454952-m02)   <cpu mode='host-passthrough'>
	I0404 21:45:00.586135   21531 main.go:141] libmachine: (ha-454952-m02)   
	I0404 21:45:00.586140   21531 main.go:141] libmachine: (ha-454952-m02)   </cpu>
	I0404 21:45:00.586151   21531 main.go:141] libmachine: (ha-454952-m02)   <os>
	I0404 21:45:00.586159   21531 main.go:141] libmachine: (ha-454952-m02)     <type>hvm</type>
	I0404 21:45:00.586172   21531 main.go:141] libmachine: (ha-454952-m02)     <boot dev='cdrom'/>
	I0404 21:45:00.586184   21531 main.go:141] libmachine: (ha-454952-m02)     <boot dev='hd'/>
	I0404 21:45:00.586199   21531 main.go:141] libmachine: (ha-454952-m02)     <bootmenu enable='no'/>
	I0404 21:45:00.586209   21531 main.go:141] libmachine: (ha-454952-m02)   </os>
	I0404 21:45:00.586216   21531 main.go:141] libmachine: (ha-454952-m02)   <devices>
	I0404 21:45:00.586227   21531 main.go:141] libmachine: (ha-454952-m02)     <disk type='file' device='cdrom'>
	I0404 21:45:00.586242   21531 main.go:141] libmachine: (ha-454952-m02)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/boot2docker.iso'/>
	I0404 21:45:00.586256   21531 main.go:141] libmachine: (ha-454952-m02)       <target dev='hdc' bus='scsi'/>
	I0404 21:45:00.586269   21531 main.go:141] libmachine: (ha-454952-m02)       <readonly/>
	I0404 21:45:00.586276   21531 main.go:141] libmachine: (ha-454952-m02)     </disk>
	I0404 21:45:00.586286   21531 main.go:141] libmachine: (ha-454952-m02)     <disk type='file' device='disk'>
	I0404 21:45:00.586295   21531 main.go:141] libmachine: (ha-454952-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0404 21:45:00.586309   21531 main.go:141] libmachine: (ha-454952-m02)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/ha-454952-m02.rawdisk'/>
	I0404 21:45:00.586324   21531 main.go:141] libmachine: (ha-454952-m02)       <target dev='hda' bus='virtio'/>
	I0404 21:45:00.586336   21531 main.go:141] libmachine: (ha-454952-m02)     </disk>
	I0404 21:45:00.586347   21531 main.go:141] libmachine: (ha-454952-m02)     <interface type='network'>
	I0404 21:45:00.586357   21531 main.go:141] libmachine: (ha-454952-m02)       <source network='mk-ha-454952'/>
	I0404 21:45:00.586366   21531 main.go:141] libmachine: (ha-454952-m02)       <model type='virtio'/>
	I0404 21:45:00.586372   21531 main.go:141] libmachine: (ha-454952-m02)     </interface>
	I0404 21:45:00.586383   21531 main.go:141] libmachine: (ha-454952-m02)     <interface type='network'>
	I0404 21:45:00.586409   21531 main.go:141] libmachine: (ha-454952-m02)       <source network='default'/>
	I0404 21:45:00.586434   21531 main.go:141] libmachine: (ha-454952-m02)       <model type='virtio'/>
	I0404 21:45:00.586440   21531 main.go:141] libmachine: (ha-454952-m02)     </interface>
	I0404 21:45:00.586445   21531 main.go:141] libmachine: (ha-454952-m02)     <serial type='pty'>
	I0404 21:45:00.586454   21531 main.go:141] libmachine: (ha-454952-m02)       <target port='0'/>
	I0404 21:45:00.586459   21531 main.go:141] libmachine: (ha-454952-m02)     </serial>
	I0404 21:45:00.586467   21531 main.go:141] libmachine: (ha-454952-m02)     <console type='pty'>
	I0404 21:45:00.586472   21531 main.go:141] libmachine: (ha-454952-m02)       <target type='serial' port='0'/>
	I0404 21:45:00.586482   21531 main.go:141] libmachine: (ha-454952-m02)     </console>
	I0404 21:45:00.586488   21531 main.go:141] libmachine: (ha-454952-m02)     <rng model='virtio'>
	I0404 21:45:00.586502   21531 main.go:141] libmachine: (ha-454952-m02)       <backend model='random'>/dev/random</backend>
	I0404 21:45:00.586506   21531 main.go:141] libmachine: (ha-454952-m02)     </rng>
	I0404 21:45:00.586512   21531 main.go:141] libmachine: (ha-454952-m02)     
	I0404 21:45:00.586518   21531 main.go:141] libmachine: (ha-454952-m02)     
	I0404 21:45:00.586523   21531 main.go:141] libmachine: (ha-454952-m02)   </devices>
	I0404 21:45:00.586530   21531 main.go:141] libmachine: (ha-454952-m02) </domain>
	I0404 21:45:00.586537   21531 main.go:141] libmachine: (ha-454952-m02) 
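The XML above is the complete libvirt definition for the m02 VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a cdrom, the raw disk image, and two virtio NICs on the mk-ha-454952 and default networks. A sketch of the subsequent "Creating domain..." step using the libvirt Go bindings (libvirt.org/go/libvirt is an assumed dependency; the XML filename is illustrative):

    // definedomain.go - sketch: define the domain above against qemu:///system and start it.
    package main

    import (
    	"fmt"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	xml, err := os.ReadFile("ha-454952-m02.xml") // the domain XML printed in the log
    	if err != nil {
    		panic(err)
    	}
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()
    	if err := dom.Create(); err != nil { // boots the VM; the IP arrives later via DHCP
    		panic(err)
    	}
    	fmt.Println("domain ha-454952-m02 defined and started")
    }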
	I0404 21:45:00.593877   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:31:ab:5e in network default
	I0404 21:45:00.594406   21531 main.go:141] libmachine: (ha-454952-m02) Ensuring networks are active...
	I0404 21:45:00.594436   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:00.595200   21531 main.go:141] libmachine: (ha-454952-m02) Ensuring network default is active
	I0404 21:45:00.595569   21531 main.go:141] libmachine: (ha-454952-m02) Ensuring network mk-ha-454952 is active
	I0404 21:45:00.595893   21531 main.go:141] libmachine: (ha-454952-m02) Getting domain xml...
	I0404 21:45:00.596623   21531 main.go:141] libmachine: (ha-454952-m02) Creating domain...
	I0404 21:45:01.877660   21531 main.go:141] libmachine: (ha-454952-m02) Waiting to get IP...
	I0404 21:45:01.878698   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:01.879348   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:01.879380   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:01.879298   21881 retry.go:31] will retry after 236.231853ms: waiting for machine to come up
	I0404 21:45:02.116876   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:02.117407   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:02.117443   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:02.117367   21881 retry.go:31] will retry after 269.603826ms: waiting for machine to come up
	I0404 21:45:02.388837   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:02.389285   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:02.389332   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:02.389269   21881 retry.go:31] will retry after 383.378459ms: waiting for machine to come up
	I0404 21:45:02.773722   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:02.774204   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:02.774253   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:02.774161   21881 retry.go:31] will retry after 505.464099ms: waiting for machine to come up
	I0404 21:45:03.281604   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:03.282114   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:03.282161   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:03.282049   21881 retry.go:31] will retry after 616.997067ms: waiting for machine to come up
	I0404 21:45:03.900883   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:03.901343   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:03.901380   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:03.901291   21881 retry.go:31] will retry after 877.843112ms: waiting for machine to come up
	I0404 21:45:04.780474   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:04.780847   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:04.780886   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:04.780811   21881 retry.go:31] will retry after 961.213944ms: waiting for machine to come up
	I0404 21:45:05.743296   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:05.743781   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:05.743810   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:05.743730   21881 retry.go:31] will retry after 982.805613ms: waiting for machine to come up
	I0404 21:45:06.727769   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:06.728425   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:06.728463   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:06.728379   21881 retry.go:31] will retry after 1.304521252s: waiting for machine to come up
	I0404 21:45:08.034126   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:08.034548   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:08.034574   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:08.034510   21881 retry.go:31] will retry after 1.73753848s: waiting for machine to come up
	I0404 21:45:09.773381   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:09.773993   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:09.774031   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:09.773950   21881 retry.go:31] will retry after 2.161610241s: waiting for machine to come up
	I0404 21:45:11.937792   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:11.938364   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:11.938389   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:11.938322   21881 retry.go:31] will retry after 3.446680064s: waiting for machine to come up
	I0404 21:45:15.386967   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:15.387421   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:15.387443   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:15.387365   21881 retry.go:31] will retry after 3.966828686s: waiting for machine to come up
	I0404 21:45:19.358507   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:19.358967   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:19.358988   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:19.358931   21881 retry.go:31] will retry after 4.138996074s: waiting for machine to come up
	I0404 21:45:23.501644   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:23.502178   21531 main.go:141] libmachine: (ha-454952-m02) Found IP for machine: 192.168.39.60
	I0404 21:45:23.502207   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has current primary IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:23.502216   21531 main.go:141] libmachine: (ha-454952-m02) Reserving static IP address...
	I0404 21:45:23.502614   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find host DHCP lease matching {name: "ha-454952-m02", mac: "52:54:00:0e:de:98", ip: "192.168.39.60"} in network mk-ha-454952
	I0404 21:45:23.579059   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Getting to WaitForSSH function...
	I0404 21:45:23.579087   21531 main.go:141] libmachine: (ha-454952-m02) Reserved static IP address: 192.168.39.60
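	The retries above show the driver polling libvirt for the guest's DHCP lease, waiting a little longer after each miss until an address for MAC 52:54:00:0e:de:98 appears. A minimal sketch of that poll-with-growing-backoff pattern follows; `lookupLeaseIP` and the backoff numbers are hypothetical stand-ins, not minikube's actual retry package.

	```go
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLeaseIP is a stand-in for querying the libvirt network for a DHCP
	// lease matching the domain's MAC address; it fails until the guest has one.
	func lookupLeaseIP(mac string) (string, error) {
		// ... query the libvirt API / `virsh net-dhcp-leases` here ...
		return "", errors.New("no lease yet")
	}

	// waitForIP polls until an IP shows up or the deadline passes, growing the
	// delay between attempts much like the "will retry after ..." lines above.
	func waitForIP(mac string, deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 200 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)/2)) // add jitter
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the backoff
		}
		return "", fmt.Errorf("machine with MAC %s never obtained an IP", mac)
	}

	func main() {
		ip, err := waitForIP("52:54:00:0e:de:98", 2*time.Second)
		fmt.Println(ip, err)
	}
	```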
	I0404 21:45:23.579125   21531 main.go:141] libmachine: (ha-454952-m02) Waiting for SSH to be available...
	I0404 21:45:23.581914   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:23.582282   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952
	I0404 21:45:23.582310   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find defined IP address of network mk-ha-454952 interface with MAC address 52:54:00:0e:de:98
	I0404 21:45:23.582468   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Using SSH client type: external
	I0404 21:45:23.582499   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa (-rw-------)
	I0404 21:45:23.582529   21531 main.go:141] libmachine: (ha-454952-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 21:45:23.582543   21531 main.go:141] libmachine: (ha-454952-m02) DBG | About to run SSH command:
	I0404 21:45:23.582560   21531 main.go:141] libmachine: (ha-454952-m02) DBG | exit 0
	I0404 21:45:23.586935   21531 main.go:141] libmachine: (ha-454952-m02) DBG | SSH cmd err, output: exit status 255: 
	I0404 21:45:23.586958   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0404 21:45:23.586968   21531 main.go:141] libmachine: (ha-454952-m02) DBG | command : exit 0
	I0404 21:45:23.586975   21531 main.go:141] libmachine: (ha-454952-m02) DBG | err     : exit status 255
	I0404 21:45:23.587009   21531 main.go:141] libmachine: (ha-454952-m02) DBG | output  : 
	I0404 21:45:26.587489   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Getting to WaitForSSH function...
	I0404 21:45:26.590334   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.590710   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:26.590734   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.590919   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Using SSH client type: external
	I0404 21:45:26.590947   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa (-rw-------)
	I0404 21:45:26.590990   21531 main.go:141] libmachine: (ha-454952-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 21:45:26.591006   21531 main.go:141] libmachine: (ha-454952-m02) DBG | About to run SSH command:
	I0404 21:45:26.591044   21531 main.go:141] libmachine: (ha-454952-m02) DBG | exit 0
	I0404 21:45:26.720957   21531 main.go:141] libmachine: (ha-454952-m02) DBG | SSH cmd err, output: <nil>: 
	I0404 21:45:26.721239   21531 main.go:141] libmachine: (ha-454952-m02) KVM machine creation complete!
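	SSH readiness above is decided by shelling out to the system ssh client and running a trivial `exit 0` until it returns status 0; the first attempt at 21:45:23 fails with status 255 because sshd is not up yet. A rough sketch of that probe, assuming a plain exec of /usr/bin/ssh with a fixed retry interval rather than minikube's own SSH helpers:

	```go
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshExitZero runs `exit 0` on the guest with flags similar to those in the
	// log; a nil error means sshd answered and the command succeeded.
	func sshExitZero(ip, keyPath string) error {
		args := []string{
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@" + ip,
			"exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run()
	}

	// waitForSSH retries the probe until it succeeds or the attempt budget runs out.
	func waitForSSH(ip, keyPath string, attempts int) error {
		for i := 0; i < attempts; i++ {
			if err := sshExitZero(ip, keyPath); err == nil {
				return nil
			}
			time.Sleep(3 * time.Second) // roughly the gap between tries above
		}
		return fmt.Errorf("ssh to %s never became available", ip)
	}

	func main() {
		// Hypothetical key path; the real one comes from the machine profile.
		fmt.Println(waitForSSH("192.168.39.60", "/path/to/id_rsa", 5))
	}
	```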
	I0404 21:45:26.721562   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetConfigRaw
	I0404 21:45:26.722111   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:26.722318   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:26.722460   21531 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0404 21:45:26.722476   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 21:45:26.723684   21531 main.go:141] libmachine: Detecting operating system of created instance...
	I0404 21:45:26.723697   21531 main.go:141] libmachine: Waiting for SSH to be available...
	I0404 21:45:26.723703   21531 main.go:141] libmachine: Getting to WaitForSSH function...
	I0404 21:45:26.723708   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:26.725754   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.726161   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:26.726182   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.726335   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:26.726553   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.726766   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.726951   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:26.727140   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:26.727343   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:26.727355   21531 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0404 21:45:26.836708   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:45:26.836734   21531 main.go:141] libmachine: Detecting the provisioner...
	I0404 21:45:26.836744   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:26.839938   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.840332   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:26.840361   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.840569   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:26.840783   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.840943   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.841059   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:26.841253   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:26.841476   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:26.841495   21531 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0404 21:45:26.953111   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0404 21:45:26.953181   21531 main.go:141] libmachine: found compatible host: buildroot
	I0404 21:45:26.953192   21531 main.go:141] libmachine: Provisioning with buildroot...
	I0404 21:45:26.953204   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetMachineName
	I0404 21:45:26.953474   21531 buildroot.go:166] provisioning hostname "ha-454952-m02"
	I0404 21:45:26.953502   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetMachineName
	I0404 21:45:26.953659   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:26.956549   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.956908   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:26.956937   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.957079   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:26.957236   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.957390   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.957532   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:26.957687   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:26.957867   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:26.957892   21531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-454952-m02 && echo "ha-454952-m02" | sudo tee /etc/hostname
	I0404 21:45:27.083989   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-454952-m02
	
	I0404 21:45:27.084014   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:27.086982   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.087393   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.087424   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.087609   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:27.087793   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.087937   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.088043   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:27.088286   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:27.088452   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:27.088469   21531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-454952-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-454952-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-454952-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 21:45:27.206028   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:45:27.206055   21531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 21:45:27.206074   21531 buildroot.go:174] setting up certificates
	I0404 21:45:27.206086   21531 provision.go:84] configureAuth start
	I0404 21:45:27.206096   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetMachineName
	I0404 21:45:27.206369   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:45:27.208940   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.209285   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.209319   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.209470   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:27.211924   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.212290   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.212318   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.212425   21531 provision.go:143] copyHostCerts
	I0404 21:45:27.212472   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:45:27.212511   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 21:45:27.212523   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:45:27.212612   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 21:45:27.212702   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:45:27.212728   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 21:45:27.212736   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:45:27.212774   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 21:45:27.212834   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:45:27.212858   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 21:45:27.212874   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:45:27.212910   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 21:45:27.212993   21531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.ha-454952-m02 san=[127.0.0.1 192.168.39.60 ha-454952-m02 localhost minikube]
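	The server certificate generated above is signed by the local minikube CA and carries both IP and DNS SANs, so one cert is valid for 127.0.0.1, the node IP and the hostname at the same time. A compressed sketch of that kind of SAN-bearing issuance with Go's crypto/x509 follows; it is illustrative only (CA loading and error handling elided), not minikube's actual cert code.

	```go
	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a server certificate with the given IP and DNS SANs
	// using an already-loaded CA certificate and key.
	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
		ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-454952-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.39.60
			DNSNames:     dnsNames, // e.g. ha-454952-m02, localhost, minikube
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}
	```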
	I0404 21:45:27.444142   21531 provision.go:177] copyRemoteCerts
	I0404 21:45:27.444192   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 21:45:27.444216   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:27.447017   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.447404   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.447433   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.447591   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:27.447809   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.448004   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:27.448148   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	I0404 21:45:27.537079   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0404 21:45:27.537138   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 21:45:27.564140   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0404 21:45:27.564219   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0404 21:45:27.591891   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0404 21:45:27.591959   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 21:45:27.618967   21531 provision.go:87] duration metric: took 412.871453ms to configureAuth
	I0404 21:45:27.618995   21531 buildroot.go:189] setting minikube options for container-runtime
	I0404 21:45:27.619165   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:45:27.619229   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:27.622532   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.622976   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.623008   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.623143   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:27.623365   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.623535   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.623667   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:27.623824   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:27.623983   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:27.623997   21531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 21:45:27.928735   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 21:45:27.928789   21531 main.go:141] libmachine: Checking connection to Docker...
	I0404 21:45:27.928798   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetURL
	I0404 21:45:27.930111   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Using libvirt version 6000000
	I0404 21:45:27.932772   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.933200   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.933231   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.933409   21531 main.go:141] libmachine: Docker is up and running!
	I0404 21:45:27.933424   21531 main.go:141] libmachine: Reticulating splines...
	I0404 21:45:27.933439   21531 client.go:171] duration metric: took 27.631265815s to LocalClient.Create
	I0404 21:45:27.933461   21531 start.go:167] duration metric: took 27.631314558s to libmachine.API.Create "ha-454952"
	I0404 21:45:27.933470   21531 start.go:293] postStartSetup for "ha-454952-m02" (driver="kvm2")
	I0404 21:45:27.933480   21531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 21:45:27.933499   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:27.933704   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 21:45:27.933724   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:27.936189   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.936512   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.936541   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.936669   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:27.936876   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.937042   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:27.937234   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	I0404 21:45:28.023309   21531 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 21:45:28.027805   21531 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 21:45:28.027836   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 21:45:28.027903   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 21:45:28.027969   21531 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 21:45:28.027980   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /etc/ssl/certs/125542.pem
	I0404 21:45:28.028088   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 21:45:28.038297   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:45:28.063041   21531 start.go:296] duration metric: took 129.558479ms for postStartSetup
	I0404 21:45:28.063098   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetConfigRaw
	I0404 21:45:28.063738   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:45:28.066667   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.067100   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:28.067124   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.067352   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:45:28.067582   21531 start.go:128] duration metric: took 27.788519902s to createHost
	I0404 21:45:28.067612   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:28.071313   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.071654   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:28.071688   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.071814   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:28.072005   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:28.072209   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:28.072354   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:28.072502   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:28.072691   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:28.072701   21531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 21:45:28.185571   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712267128.163702316
	
	I0404 21:45:28.185598   21531 fix.go:216] guest clock: 1712267128.163702316
	I0404 21:45:28.185608   21531 fix.go:229] Guest: 2024-04-04 21:45:28.163702316 +0000 UTC Remote: 2024-04-04 21:45:28.067598122 +0000 UTC m=+85.464231324 (delta=96.104194ms)
	I0404 21:45:28.185633   21531 fix.go:200] guest clock delta is within tolerance: 96.104194ms
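	The clock check above parses the guest's `date +%s.%N` output, compares it with the host timestamp taken when the command returned, and leaves the guest clock alone because the 96ms delta is inside the tolerance. A small sketch of that comparison; the 2s tolerance is an assumption for illustration, not the value minikube uses.

	```go
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output such as
	// "1712267128.163702316" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad/trim to nanoseconds
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1712267128.163702316")
		host := guest.Add(96 * time.Millisecond) // stand-in for time.Now() on the host
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}
	```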
	I0404 21:45:28.185639   21531 start.go:83] releasing machines lock for "ha-454952-m02", held for 27.906690079s
	I0404 21:45:28.185663   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:28.185952   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:45:28.188559   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.188890   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:28.188919   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.191416   21531 out.go:177] * Found network options:
	I0404 21:45:28.192897   21531 out.go:177]   - NO_PROXY=192.168.39.13
	W0404 21:45:28.194105   21531 proxy.go:119] fail to check proxy env: Error ip not in block
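	The `fail to check proxy env: Error ip not in block` warning above (repeated a few lines later) is benign: NO_PROXY only lists 192.168.39.13, so the new node's 192.168.39.60 is neither an exact entry nor inside any CIDR block. A rough sketch of that membership test, not minikube's proxy package:

	```go
	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	// ipInNoProxy reports whether ip matches any entry in a NO_PROXY-style list,
	// either exactly or as a member of a CIDR block.
	func ipInNoProxy(ip, noProxy string) bool {
		addr := net.ParseIP(ip)
		for _, entry := range strings.Split(noProxy, ",") {
			entry = strings.TrimSpace(entry)
			if entry == ip {
				return true
			}
			if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(addr) {
				return true
			}
		}
		return false
	}

	func main() {
		fmt.Println(ipInNoProxy("192.168.39.60", "192.168.39.13")) // false: "ip not in block"
	}
	```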
	I0404 21:45:28.194140   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:28.194757   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:28.194929   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:28.195009   21531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 21:45:28.195049   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	W0404 21:45:28.195155   21531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0404 21:45:28.195239   21531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 21:45:28.195259   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:28.197662   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.198021   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.198073   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:28.198091   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.198296   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:28.198423   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:28.198453   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:28.198452   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.198717   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:28.198726   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:28.198949   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	I0404 21:45:28.198967   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:28.199166   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:28.199327   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	I0404 21:45:28.433038   21531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 21:45:28.439811   21531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 21:45:28.439886   21531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 21:45:28.457393   21531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 21:45:28.457423   21531 start.go:494] detecting cgroup driver to use...
	I0404 21:45:28.457490   21531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 21:45:28.474546   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 21:45:28.489787   21531 docker.go:217] disabling cri-docker service (if available) ...
	I0404 21:45:28.489847   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 21:45:28.503963   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 21:45:28.518290   21531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 21:45:28.637383   21531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 21:45:28.788758   21531 docker.go:233] disabling docker service ...
	I0404 21:45:28.788826   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 21:45:28.805511   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 21:45:28.819427   21531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 21:45:28.959689   21531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 21:45:29.103883   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 21:45:29.118755   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 21:45:29.139139   21531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 21:45:29.139213   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.150656   21531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 21:45:29.150730   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.162665   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.175117   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.187243   21531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 21:45:29.199827   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.212464   21531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.233434   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.245487   21531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 21:45:29.256575   21531 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 21:45:29.256640   21531 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 21:45:29.272739   21531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 21:45:29.284733   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:45:29.413393   21531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 21:45:29.560029   21531 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 21:45:29.560102   21531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 21:45:29.565394   21531 start.go:562] Will wait 60s for crictl version
	I0404 21:45:29.565444   21531 ssh_runner.go:195] Run: which crictl
	I0404 21:45:29.570093   21531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 21:45:29.609360   21531 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
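	After restarting CRI-O the tool gives the runtime up to 60s for /var/run/crio/crio.sock to appear and for `crictl version` to answer. A minimal sketch of that bounded wait on a socket path, using plain local os.Stat polling where minikube actually stats the path over SSH:

	```go
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for the socket file until it exists or the timeout
	// elapses, mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is there; `crictl version` can be tried next
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
	```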
	I0404 21:45:29.609434   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:45:29.641317   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:45:29.672765   21531 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 21:45:29.674396   21531 out.go:177]   - env NO_PROXY=192.168.39.13
	I0404 21:45:29.675983   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:45:29.678787   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:29.679137   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:29.679157   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:29.679434   21531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 21:45:29.683924   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:45:29.698250   21531 mustload.go:65] Loading cluster: ha-454952
	I0404 21:45:29.698463   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:45:29.698722   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:45:29.698754   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:45:29.714030   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41751
	I0404 21:45:29.714397   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:45:29.714808   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:45:29.714824   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:45:29.715195   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:45:29.715375   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:45:29.716904   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:45:29.717311   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:45:29.717342   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:45:29.731650   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0404 21:45:29.732057   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:45:29.732518   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:45:29.732541   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:45:29.732922   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:45:29.733111   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:45:29.733311   21531 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952 for IP: 192.168.39.60
	I0404 21:45:29.733323   21531 certs.go:194] generating shared ca certs ...
	I0404 21:45:29.733339   21531 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:45:29.733478   21531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 21:45:29.733530   21531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 21:45:29.733543   21531 certs.go:256] generating profile certs ...
	I0404 21:45:29.733715   21531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key
	I0404 21:45:29.733751   21531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.6b270c4f
	I0404 21:45:29.733772   21531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.6b270c4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.13 192.168.39.60 192.168.39.254]
	I0404 21:45:29.807683   21531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.6b270c4f ...
	I0404 21:45:29.807716   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.6b270c4f: {Name:mkd103717d1c351620973f640a9417354542e3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:45:29.807906   21531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.6b270c4f ...
	I0404 21:45:29.807924   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.6b270c4f: {Name:mk07c5ec9d008651c2ca286887884086db0afe24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:45:29.808022   21531 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.6b270c4f -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt
	I0404 21:45:29.808212   21531 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.6b270c4f -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key
	I0404 21:45:29.808396   21531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key
	I0404 21:45:29.808414   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0404 21:45:29.808431   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0404 21:45:29.808450   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0404 21:45:29.808468   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0404 21:45:29.808493   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0404 21:45:29.808510   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0404 21:45:29.808524   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0404 21:45:29.808542   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0404 21:45:29.808624   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 21:45:29.808665   21531 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 21:45:29.808678   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 21:45:29.808708   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 21:45:29.808739   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 21:45:29.808770   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 21:45:29.808837   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:45:29.808877   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /usr/share/ca-certificates/125542.pem
	I0404 21:45:29.808896   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:45:29.808913   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem -> /usr/share/ca-certificates/12554.pem
	I0404 21:45:29.808950   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:45:29.812039   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:45:29.812452   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:45:29.812472   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:45:29.812658   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:45:29.812831   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:45:29.812989   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:45:29.813160   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:45:29.892540   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0404 21:45:29.898217   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0404 21:45:29.911157   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0404 21:45:29.915892   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0404 21:45:29.928090   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0404 21:45:29.932403   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0404 21:45:29.943834   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0404 21:45:29.948036   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0404 21:45:29.960525   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0404 21:45:29.965239   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0404 21:45:29.981031   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0404 21:45:29.985580   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0404 21:45:29.997512   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 21:45:30.024317   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 21:45:30.051187   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 21:45:30.077854   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 21:45:30.105971   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0404 21:45:30.131831   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 21:45:30.157884   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 21:45:30.183114   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 21:45:30.211074   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 21:45:30.237872   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 21:45:30.265115   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 21:45:30.292810   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0404 21:45:30.314525   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0404 21:45:30.332072   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0404 21:45:30.349494   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0404 21:45:30.368701   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0404 21:45:30.387574   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0404 21:45:30.405763   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0404 21:45:30.423168   21531 ssh_runner.go:195] Run: openssl version
	I0404 21:45:30.429038   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 21:45:30.441069   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 21:45:30.446531   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 21:45:30.446592   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 21:45:30.452986   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 21:45:30.465883   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 21:45:30.477901   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:45:30.482627   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:45:30.482682   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:45:30.489021   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 21:45:30.502287   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 21:45:30.515896   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 21:45:30.520543   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 21:45:30.520605   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 21:45:30.526429   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
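
The three steps above hash each CA with "openssl x509 -hash -noout" and then symlink the certificate into /etc/ssl/certs as <hash>.0, which is the standard OpenSSL subject-hash lookup layout. A minimal Go sketch of that step follows; linkCert is a hypothetical helper that simply replays the logged shell commands, it is not minikube's actual certs.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCert mirrors the two commands logged above: compute the OpenSSL subject
// hash of a CA certificate and symlink it into /etc/ssl/certs as <hash>.0 so
// TLS clients can find it by hash lookup. Hypothetical helper, not certs.go.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// equivalent of: test -L <link> || ln -fs <certPath> <link>
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link failed:", err)
	}
}
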
	I0404 21:45:30.538115   21531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 21:45:30.542417   21531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0404 21:45:30.542475   21531 kubeadm.go:928] updating node {m02 192.168.39.60 8443 v1.29.3 crio true true} ...
	I0404 21:45:30.542554   21531 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-454952-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 21:45:30.542578   21531 kube-vip.go:111] generating kube-vip config ...
	I0404 21:45:30.542611   21531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0404 21:45:30.561396   21531 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0404 21:45:30.561537   21531 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0404 21:45:30.561595   21531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 21:45:30.573506   21531 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0404 21:45:30.573557   21531 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0404 21:45:30.584050   21531 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0404 21:45:30.584083   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0404 21:45:30.584153   21531 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0404 21:45:30.584167   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0404 21:45:30.584191   21531 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0404 21:45:30.588823   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0404 21:45:30.588855   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0404 21:45:56.438742   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:45:56.454469   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0404 21:45:56.454568   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0404 21:45:56.458893   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0404 21:45:56.458926   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0404 21:45:58.191023   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0404 21:45:58.191110   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0404 21:45:58.196342   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0404 21:45:58.196372   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
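
Each binary above is fetched from dl.k8s.io with a "?checksum=file:...sha256" query, meaning the download is verified against the published SHA-256 digest before being cached locally and scp'd into /var/lib/minikube/binaries. A minimal Go sketch of that verification, assuming a kubelet binary and a kubelet.sha256 digest file sitting side by side (illustrative names, not minikube's download.go):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifySHA256 checks a downloaded file against an expected hex digest, the
// same guarantee the "?checksum=file:...sha256" query in the log expresses.
func verifySHA256(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", path, got, wantHex)
	}
	return nil
}

func main() {
	// Illustrative file names: the digest file is assumed to hold the hex
	// digest as its first field, as the published kubelet.sha256 does.
	digest, err := os.ReadFile("kubelet.sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(digest))[0]
	if err := verifySHA256("kubelet", want); err != nil {
		panic(err)
	}
	fmt.Println("kubelet checksum OK")
}
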
	I0404 21:45:58.450605   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0404 21:45:58.460793   21531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0404 21:45:58.478720   21531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 21:45:58.497698   21531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0404 21:45:58.515695   21531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0404 21:45:58.519999   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:45:58.533166   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:45:58.664897   21531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:45:58.682498   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:45:58.682825   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:45:58.682860   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:45:58.698067   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43069
	I0404 21:45:58.698482   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:45:58.699051   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:45:58.699078   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:45:58.699411   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:45:58.699647   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:45:58.699821   21531 start.go:316] joinCluster: &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:45:58.699914   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0404 21:45:58.699929   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:45:58.702998   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:45:58.703459   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:45:58.703488   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:45:58.703633   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:45:58.703805   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:45:58.703972   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:45:58.704105   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:45:58.887846   21531 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:45:58.887889   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2defvu.xmfc923okok4qteb --discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-454952-m02 --control-plane --apiserver-advertise-address=192.168.39.60 --apiserver-bind-port=8443"
	I0404 21:46:23.956199   21531 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2defvu.xmfc923okok4qteb --discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-454952-m02 --control-plane --apiserver-advertise-address=192.168.39.60 --apiserver-bind-port=8443": (25.068283341s)
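
The join command above pins the cluster CA via --discovery-token-ca-cert-hash sha256:<hex>; kubeadm defines that value as the SHA-256 of the CA certificate's Subject Public Key Info. A small Go sketch that derives it from the ca.crt copied earlier (the path is reused from the log; this is not minikube's own code):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA and hash its Subject Public Key Info; kubeadm's
	// --discovery-token-ca-cert-hash sha256:<hex> pins exactly this value.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
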
	I0404 21:46:23.956235   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0404 21:46:24.469532   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-454952-m02 minikube.k8s.io/updated_at=2024_04_04T21_46_24_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=ha-454952 minikube.k8s.io/primary=false
	I0404 21:46:24.622440   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-454952-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0404 21:46:24.729960   21531 start.go:318] duration metric: took 26.030136183s to joinCluster
	I0404 21:46:24.730023   21531 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:46:24.731971   21531 out.go:177] * Verifying Kubernetes components...
	I0404 21:46:24.730302   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:46:24.733336   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:46:24.925708   21531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:46:24.989603   21531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:46:24.989909   21531 kapi.go:59] client config for ha-454952: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key", CAFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5c6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0404 21:46:24.989985   21531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.13:8443
	I0404 21:46:24.990268   21531 node_ready.go:35] waiting up to 6m0s for node "ha-454952-m02" to be "Ready" ...
	I0404 21:46:24.990356   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:24.990367   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:24.990377   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:24.990386   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:25.002065   21531 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0404 21:46:25.490882   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:25.490901   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:25.490909   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:25.490915   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:25.494631   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:25.990628   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:25.990654   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:25.990666   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:25.990679   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:25.993916   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:26.491434   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:26.491458   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:26.491469   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:26.491475   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:26.495319   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:26.990465   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:26.990487   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:26.990495   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:26.990499   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:26.994146   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:26.995086   21531 node_ready.go:53] node "ha-454952-m02" has status "Ready":"False"
	I0404 21:46:27.491398   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:27.491421   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:27.491458   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:27.491464   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:27.494894   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:27.991073   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:27.991098   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:27.991107   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:27.991116   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:27.995056   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:28.491179   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:28.491207   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:28.491218   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:28.491226   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:28.495320   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:28.991235   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:28.991257   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:28.991266   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:28.991273   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:28.995835   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:28.996482   21531 node_ready.go:53] node "ha-454952-m02" has status "Ready":"False"
	I0404 21:46:29.490887   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:29.490908   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:29.490914   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:29.490917   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:29.494469   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:29.991300   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:29.991335   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:29.991342   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:29.991346   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:29.994389   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:30.491083   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:30.491102   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.491110   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.491113   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.494483   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:30.495199   21531 node_ready.go:49] node "ha-454952-m02" has status "Ready":"True"
	I0404 21:46:30.495227   21531 node_ready.go:38] duration metric: took 5.504929948s for node "ha-454952-m02" to be "Ready" ...
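
The repeated GETs above are minikube polling the node object roughly every 500ms until its Ready condition turns True. A hedged client-go sketch of the same check; isNodeReady is a hypothetical helper, not minikube's node_ready.go, and the kubeconfig path is taken from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's NodeReady condition is True — the
// condition the polling loop above keeps re-checking for ha-454952-m02.
func isNodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16143-5297/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isNodeReady(context.Background(), client, "ha-454952-m02")
	fmt.Println(ready, err)
}
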
	I0404 21:46:30.495236   21531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 21:46:30.495373   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:46:30.495385   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.495392   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.495396   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.500629   21531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0404 21:46:30.506720   21531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9qsz7" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.506809   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-9qsz7
	I0404 21:46:30.506822   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.506831   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.506838   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.510005   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:30.510750   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:30.510775   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.510781   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.510785   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.513908   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:30.514394   21531 pod_ready.go:92] pod "coredns-76f75df574-9qsz7" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:30.514413   21531 pod_ready.go:81] duration metric: took 7.670219ms for pod "coredns-76f75df574-9qsz7" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.514423   21531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hsdfw" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.514473   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-hsdfw
	I0404 21:46:30.514480   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.514487   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.514492   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.517301   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:30.517882   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:30.517898   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.517905   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.517910   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.520578   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:30.521155   21531 pod_ready.go:92] pod "coredns-76f75df574-hsdfw" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:30.521172   21531 pod_ready.go:81] duration metric: took 6.743286ms for pod "coredns-76f75df574-hsdfw" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.521181   21531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.521239   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952
	I0404 21:46:30.521249   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.521256   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.521260   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.524258   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:30.525102   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:30.525124   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.525131   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.525137   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.528292   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:30.529146   21531 pod_ready.go:92] pod "etcd-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:30.529166   21531 pod_ready.go:81] duration metric: took 7.977704ms for pod "etcd-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.529175   21531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.529263   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:30.529276   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.529283   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.529287   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.532091   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:30.532889   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:30.532905   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.532915   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.532918   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.535402   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:31.029639   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:31.029662   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:31.029670   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:31.029673   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:31.033490   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:31.034087   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:31.034103   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:31.034111   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:31.034115   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:31.037298   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:31.529424   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:31.529444   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:31.529450   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:31.529454   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:31.533195   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:31.534076   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:31.534098   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:31.534108   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:31.534117   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:31.537925   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:32.029843   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:32.029869   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:32.029878   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:32.029881   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:32.033777   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:32.034534   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:32.034547   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:32.034553   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:32.034559   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:32.037396   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:32.530229   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:32.530267   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:32.530275   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:32.530279   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:32.534214   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:32.535354   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:32.535372   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:32.535379   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:32.535382   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:32.538606   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:32.539304   21531 pod_ready.go:102] pod "etcd-ha-454952-m02" in "kube-system" namespace has status "Ready":"False"
	I0404 21:46:33.029394   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:33.029425   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:33.029433   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:33.029437   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:33.033398   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:33.034003   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:33.034019   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:33.034028   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:33.034034   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:33.037004   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:33.530224   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:33.530253   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:33.530262   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:33.530272   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:33.533909   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:33.534823   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:33.534840   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:33.534847   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:33.534851   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:33.537652   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:34.030372   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:34.030393   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:34.030401   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:34.030405   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:34.039930   21531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0404 21:46:34.041397   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:34.041417   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:34.041428   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:34.041431   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:34.045249   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:34.529378   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:34.529417   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:34.529424   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:34.529428   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:34.533374   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:34.534202   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:34.534218   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:34.534225   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:34.534229   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:34.537317   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:35.029373   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:35.029397   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.029405   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.029410   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.033450   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:35.034195   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:35.034208   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.034215   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.034220   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.037228   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.037748   21531 pod_ready.go:102] pod "etcd-ha-454952-m02" in "kube-system" namespace has status "Ready":"False"
	I0404 21:46:35.529695   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:35.529714   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.529721   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.529725   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.533941   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:35.534944   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:35.534963   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.534974   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.534978   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.537794   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.538543   21531 pod_ready.go:92] pod "etcd-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:35.538561   21531 pod_ready.go:81] duration metric: took 5.009380502s for pod "etcd-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.538575   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.538628   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-454952
	I0404 21:46:35.538636   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.538642   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.538646   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.541590   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.542255   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:35.542274   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.542285   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.542292   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.544857   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.545522   21531 pod_ready.go:92] pod "kube-apiserver-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:35.545545   21531 pod_ready.go:81] duration metric: took 6.963641ms for pod "kube-apiserver-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.545558   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.545628   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-454952-m02
	I0404 21:46:35.545637   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.545645   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.545652   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.548205   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.548881   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:35.548895   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.548901   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.548904   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.551179   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.551729   21531 pod_ready.go:92] pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:35.551746   21531 pod_ready.go:81] duration metric: took 6.180806ms for pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.551755   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.551803   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952
	I0404 21:46:35.551811   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.551818   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.551820   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.554254   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.691230   21531 request.go:629] Waited for 136.263257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:35.691286   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:35.691292   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.691311   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.691321   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.696097   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:35.697549   21531 pod_ready.go:92] pod "kube-controller-manager-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:35.697577   21531 pod_ready.go:81] duration metric: took 145.814687ms for pod "kube-controller-manager-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.697593   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.892070   21531 request.go:629] Waited for 194.408263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m02
	I0404 21:46:35.892178   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m02
	I0404 21:46:35.892189   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.892197   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.892203   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.895814   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:36.091210   21531 request.go:629] Waited for 194.316591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:36.091276   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:36.091282   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:36.091289   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:36.091292   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:36.094834   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:36.095670   21531 pod_ready.go:92] pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:36.095693   21531 pod_ready.go:81] duration metric: took 398.091423ms for pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
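
The "Waited for ... due to client-side throttling" messages come from client-go's default client-side rate limiter; the rest.Config logged earlier shows QPS:0, Burst:0, which means the defaults (5 requests/s, burst 10) apply, so bursts of status polls get queued. A hedged sketch of raising those limits when building a client, purely illustrative and not what the test harness does:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig used by the test run (path from the log) and raise
	// the client-side rate limits; left at zero they default to QPS 5 / Burst
	// 10, which is what produces the "client-side throttling" waits above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16143-5297/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50
	cfg.Burst = 100
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", client)
}
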
	I0404 21:46:36.095705   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6nkxm" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:36.291768   21531 request.go:629] Waited for 195.980439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nkxm
	I0404 21:46:36.291834   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nkxm
	I0404 21:46:36.291856   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:36.291864   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:36.291867   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:36.295259   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:36.491226   21531 request.go:629] Waited for 195.287616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:36.491325   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:36.491339   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:36.491346   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:36.491350   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:36.494357   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:36.494867   21531 pod_ready.go:92] pod "kube-proxy-6nkxm" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:36.494883   21531 pod_ready.go:81] duration metric: took 399.17144ms for pod "kube-proxy-6nkxm" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:36.494893   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gjvm9" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:36.692033   21531 request.go:629] Waited for 197.066541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjvm9
	I0404 21:46:36.692108   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjvm9
	I0404 21:46:36.692113   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:36.692133   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:36.692138   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:36.695596   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:36.891960   21531 request.go:629] Waited for 195.407458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:36.892024   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:36.892032   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:36.892042   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:36.892054   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:36.898107   21531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0404 21:46:36.898810   21531 pod_ready.go:92] pod "kube-proxy-gjvm9" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:36.898829   21531 pod_ready.go:81] duration metric: took 403.928463ms for pod "kube-proxy-gjvm9" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:36.898841   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:37.091944   21531 request.go:629] Waited for 193.041942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952
	I0404 21:46:37.092009   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952
	I0404 21:46:37.092015   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.092022   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.092027   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.096064   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:37.291096   21531 request.go:629] Waited for 194.285325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:37.291170   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:37.291175   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.291183   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.291187   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.294221   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:37.294848   21531 pod_ready.go:92] pod "kube-scheduler-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:37.294886   21531 pod_ready.go:81] duration metric: took 396.037372ms for pod "kube-scheduler-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:37.294899   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:37.491988   21531 request.go:629] Waited for 197.014907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m02
	I0404 21:46:37.492058   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m02
	I0404 21:46:37.492068   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.492076   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.492085   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.495596   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:37.691545   21531 request.go:629] Waited for 195.216161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:37.691627   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:37.691634   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.691645   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.691652   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.695020   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:37.695705   21531 pod_ready.go:92] pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:37.695724   21531 pod_ready.go:81] duration metric: took 400.817481ms for pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:37.695734   21531 pod_ready.go:38] duration metric: took 7.200463659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
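	The readiness polling captured above (pod_ready.go alternating pod and node GETs, rate-limited client-side) can be approximated with a small client-go loop. The sketch below is illustrative only and is not minikube's pod_ready implementation; the kubeconfig path and the pod name argument are assumptions made for the example.

	// podready_sketch.go - minimal sketch of waiting for a pod to report Ready,
	// similar in spirit to the pod_ready.go polling shown in the log above.
	// Assumptions: kubeconfig at the default path, pod name passed as os.Args[1].
	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		name := os.Args[1] // e.g. "kube-scheduler-ha-454952"
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // client-go applies its own client-side rate limiting on top of this
		}
	}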
	I0404 21:46:37.695748   21531 api_server.go:52] waiting for apiserver process to appear ...
	I0404 21:46:37.695799   21531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:46:37.711868   21531 api_server.go:72] duration metric: took 12.981814066s to wait for apiserver process to appear ...
	I0404 21:46:37.711900   21531 api_server.go:88] waiting for apiserver healthz status ...
	I0404 21:46:37.711924   21531 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0404 21:46:37.717849   21531 api_server.go:279] https://192.168.39.13:8443/healthz returned 200:
	ok
	I0404 21:46:37.717911   21531 round_trippers.go:463] GET https://192.168.39.13:8443/version
	I0404 21:46:37.717917   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.717924   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.717928   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.718819   21531 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0404 21:46:37.718958   21531 api_server.go:141] control plane version: v1.29.3
	I0404 21:46:37.718980   21531 api_server.go:131] duration metric: took 7.072339ms to wait for apiserver health ...
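	The healthz and version probes logged here amount to two HTTPS GETs against the control-plane endpoint. A minimal stand-alone sketch follows; it assumes the endpoint allows anonymous access and deliberately skips TLS verification for brevity, which a real client should not do (minikube's own check uses the cluster CA and client certificates).

	// healthz_sketch.go - illustrative probe of the apiserver /healthz and /version endpoints.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for _, path := range []string{"/healthz", "/version"} {
			resp, err := client.Get("https://192.168.39.13:8443" + path)
			if err != nil {
				fmt.Println(path, "error:", err)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
		}
	}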
	I0404 21:46:37.718991   21531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 21:46:37.891584   21531 request.go:629] Waited for 172.519797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:46:37.891683   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:46:37.891694   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.891705   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.891714   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.900096   21531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0404 21:46:37.906669   21531 system_pods.go:59] 17 kube-system pods found
	I0404 21:46:37.906715   21531 system_pods.go:61] "coredns-76f75df574-9qsz7" [5af3d10e-47b7-439c-80e3-8ee328d87f16] Running
	I0404 21:46:37.906724   21531 system_pods.go:61] "coredns-76f75df574-hsdfw" [e0384e31-ec4b-4b09-b387-5bce7a36b688] Running
	I0404 21:46:37.906729   21531 system_pods.go:61] "etcd-ha-454952" [d3885d9d-9e4a-4ebb-9bbb-85d1fc88519a] Running
	I0404 21:46:37.906733   21531 system_pods.go:61] "etcd-ha-454952-m02" [a84a3d55-0e63-4944-8368-141d61a3dfdd] Running
	I0404 21:46:37.906737   21531 system_pods.go:61] "kindnet-7c9dv" [044a24e9-2851-47a3-be58-29ecfae2f0fb] Running
	I0404 21:46:37.906741   21531 system_pods.go:61] "kindnet-v8wv6" [44250298-dce4-4e12-88c2-e347b4a63711] Running
	I0404 21:46:37.906746   21531 system_pods.go:61] "kube-apiserver-ha-454952" [93735313-30dc-4c96-b847-a6119cf400c8] Running
	I0404 21:46:37.906751   21531 system_pods.go:61] "kube-apiserver-ha-454952-m02" [b1abcf64-e80a-4e10-b069-c7d6827bda4a] Running
	I0404 21:46:37.906757   21531 system_pods.go:61] "kube-controller-manager-ha-454952" [17a4bba8-4424-4a4c-b5d4-88693cb013b6] Running
	I0404 21:46:37.906762   21531 system_pods.go:61] "kube-controller-manager-ha-454952-m02" [29adfa13-fd82-4b6b-a00a-3ecf9fb575de] Running
	I0404 21:46:37.906770   21531 system_pods.go:61] "kube-proxy-6nkxm" [67f1a256-ee89-4563-ab19-4a75f01d2c3a] Running
	I0404 21:46:37.906776   21531 system_pods.go:61] "kube-proxy-gjvm9" [60759cb6-a394-4e3e-a19e-f9b7c92a19db] Running
	I0404 21:46:37.906783   21531 system_pods.go:61] "kube-scheduler-ha-454952" [9ad2aa31-283d-47d1-a7ff-3c13974f4ba8] Running
	I0404 21:46:37.906789   21531 system_pods.go:61] "kube-scheduler-ha-454952-m02" [89c4a892-3be4-41a7-aa44-b19230ff2515] Running
	I0404 21:46:37.906794   21531 system_pods.go:61] "kube-vip-ha-454952" [87898a7a-a2df-46cf-8b58-f6e5ca4e5f7b] Running
	I0404 21:46:37.906799   21531 system_pods.go:61] "kube-vip-ha-454952-m02" [2632721a-d7cc-40f1-988d-ab0aa8cfe79a] Running
	I0404 21:46:37.906808   21531 system_pods.go:61] "storage-provisioner" [c8531ddb-fa9d-4efe-91cc-072e75a5897d] Running
	I0404 21:46:37.906817   21531 system_pods.go:74] duration metric: took 187.815542ms to wait for pod list to return data ...
	I0404 21:46:37.906831   21531 default_sa.go:34] waiting for default service account to be created ...
	I0404 21:46:38.091194   21531 request.go:629] Waited for 184.268682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/default/serviceaccounts
	I0404 21:46:38.091273   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/default/serviceaccounts
	I0404 21:46:38.091287   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:38.091298   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:38.091304   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:38.095221   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:38.095441   21531 default_sa.go:45] found service account: "default"
	I0404 21:46:38.095458   21531 default_sa.go:55] duration metric: took 188.620189ms for default service account to be created ...
	I0404 21:46:38.095468   21531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 21:46:38.291929   21531 request.go:629] Waited for 196.380448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:46:38.292006   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:46:38.292014   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:38.292024   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:38.292030   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:38.297802   21531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0404 21:46:38.302343   21531 system_pods.go:86] 17 kube-system pods found
	I0404 21:46:38.302372   21531 system_pods.go:89] "coredns-76f75df574-9qsz7" [5af3d10e-47b7-439c-80e3-8ee328d87f16] Running
	I0404 21:46:38.302378   21531 system_pods.go:89] "coredns-76f75df574-hsdfw" [e0384e31-ec4b-4b09-b387-5bce7a36b688] Running
	I0404 21:46:38.302383   21531 system_pods.go:89] "etcd-ha-454952" [d3885d9d-9e4a-4ebb-9bbb-85d1fc88519a] Running
	I0404 21:46:38.302387   21531 system_pods.go:89] "etcd-ha-454952-m02" [a84a3d55-0e63-4944-8368-141d61a3dfdd] Running
	I0404 21:46:38.302391   21531 system_pods.go:89] "kindnet-7c9dv" [044a24e9-2851-47a3-be58-29ecfae2f0fb] Running
	I0404 21:46:38.302395   21531 system_pods.go:89] "kindnet-v8wv6" [44250298-dce4-4e12-88c2-e347b4a63711] Running
	I0404 21:46:38.302398   21531 system_pods.go:89] "kube-apiserver-ha-454952" [93735313-30dc-4c96-b847-a6119cf400c8] Running
	I0404 21:46:38.302402   21531 system_pods.go:89] "kube-apiserver-ha-454952-m02" [b1abcf64-e80a-4e10-b069-c7d6827bda4a] Running
	I0404 21:46:38.302407   21531 system_pods.go:89] "kube-controller-manager-ha-454952" [17a4bba8-4424-4a4c-b5d4-88693cb013b6] Running
	I0404 21:46:38.302411   21531 system_pods.go:89] "kube-controller-manager-ha-454952-m02" [29adfa13-fd82-4b6b-a00a-3ecf9fb575de] Running
	I0404 21:46:38.302415   21531 system_pods.go:89] "kube-proxy-6nkxm" [67f1a256-ee89-4563-ab19-4a75f01d2c3a] Running
	I0404 21:46:38.302418   21531 system_pods.go:89] "kube-proxy-gjvm9" [60759cb6-a394-4e3e-a19e-f9b7c92a19db] Running
	I0404 21:46:38.302422   21531 system_pods.go:89] "kube-scheduler-ha-454952" [9ad2aa31-283d-47d1-a7ff-3c13974f4ba8] Running
	I0404 21:46:38.302429   21531 system_pods.go:89] "kube-scheduler-ha-454952-m02" [89c4a892-3be4-41a7-aa44-b19230ff2515] Running
	I0404 21:46:38.302433   21531 system_pods.go:89] "kube-vip-ha-454952" [87898a7a-a2df-46cf-8b58-f6e5ca4e5f7b] Running
	I0404 21:46:38.302439   21531 system_pods.go:89] "kube-vip-ha-454952-m02" [2632721a-d7cc-40f1-988d-ab0aa8cfe79a] Running
	I0404 21:46:38.302443   21531 system_pods.go:89] "storage-provisioner" [c8531ddb-fa9d-4efe-91cc-072e75a5897d] Running
	I0404 21:46:38.302451   21531 system_pods.go:126] duration metric: took 206.976769ms to wait for k8s-apps to be running ...
	I0404 21:46:38.302461   21531 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 21:46:38.302504   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:46:38.319761   21531 system_svc.go:56] duration metric: took 17.288893ms WaitForService to wait for kubelet
	I0404 21:46:38.319805   21531 kubeadm.go:576] duration metric: took 13.58975508s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 21:46:38.319828   21531 node_conditions.go:102] verifying NodePressure condition ...
	I0404 21:46:38.491192   21531 request.go:629] Waited for 171.296984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes
	I0404 21:46:38.491298   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes
	I0404 21:46:38.491309   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:38.491321   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:38.491328   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:38.494827   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:38.495717   21531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 21:46:38.495737   21531 node_conditions.go:123] node cpu capacity is 2
	I0404 21:46:38.495749   21531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 21:46:38.495753   21531 node_conditions.go:123] node cpu capacity is 2
	I0404 21:46:38.495757   21531 node_conditions.go:105] duration metric: took 175.923144ms to run NodePressure ...
	I0404 21:46:38.495767   21531 start.go:240] waiting for startup goroutines ...
	I0404 21:46:38.495790   21531 start.go:254] writing updated cluster config ...
	I0404 21:46:38.497976   21531 out.go:177] 
	I0404 21:46:38.499618   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:46:38.499746   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:46:38.501674   21531 out.go:177] * Starting "ha-454952-m03" control-plane node in "ha-454952" cluster
	I0404 21:46:38.502950   21531 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:46:38.502978   21531 cache.go:56] Caching tarball of preloaded images
	I0404 21:46:38.503087   21531 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 21:46:38.503100   21531 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0404 21:46:38.503204   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:46:38.503374   21531 start.go:360] acquireMachinesLock for ha-454952-m03: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 21:46:38.503417   21531 start.go:364] duration metric: took 23.763µs to acquireMachinesLock for "ha-454952-m03"
	I0404 21:46:38.503431   21531 start.go:93] Provisioning new machine with config: &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:46:38.503520   21531 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0404 21:46:38.505236   21531 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0404 21:46:38.505341   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:46:38.505385   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:46:38.522036   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I0404 21:46:38.522433   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:46:38.522935   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:46:38.522955   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:46:38.523285   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:46:38.523515   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetMachineName
	I0404 21:46:38.523647   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:46:38.523785   21531 start.go:159] libmachine.API.Create for "ha-454952" (driver="kvm2")
	I0404 21:46:38.523834   21531 client.go:168] LocalClient.Create starting
	I0404 21:46:38.523869   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem
	I0404 21:46:38.523903   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:46:38.523917   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:46:38.523969   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem
	I0404 21:46:38.523987   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:46:38.523998   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:46:38.524013   21531 main.go:141] libmachine: Running pre-create checks...
	I0404 21:46:38.524021   21531 main.go:141] libmachine: (ha-454952-m03) Calling .PreCreateCheck
	I0404 21:46:38.524175   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetConfigRaw
	I0404 21:46:38.524522   21531 main.go:141] libmachine: Creating machine...
	I0404 21:46:38.524536   21531 main.go:141] libmachine: (ha-454952-m03) Calling .Create
	I0404 21:46:38.524669   21531 main.go:141] libmachine: (ha-454952-m03) Creating KVM machine...
	I0404 21:46:38.525942   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found existing default KVM network
	I0404 21:46:38.526083   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found existing private KVM network mk-ha-454952
	I0404 21:46:38.526218   21531 main.go:141] libmachine: (ha-454952-m03) Setting up store path in /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03 ...
	I0404 21:46:38.526238   21531 main.go:141] libmachine: (ha-454952-m03) Building disk image from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 21:46:38.526258   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:38.526190   22299 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:46:38.526353   21531 main.go:141] libmachine: (ha-454952-m03) Downloading /home/jenkins/minikube-integration/16143-5297/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0404 21:46:38.751166   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:38.751030   22299 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa...
	I0404 21:46:38.959700   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:38.959568   22299 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/ha-454952-m03.rawdisk...
	I0404 21:46:38.959728   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Writing magic tar header
	I0404 21:46:38.959739   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Writing SSH key tar header
	I0404 21:46:38.959751   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:38.959683   22299 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03 ...
	I0404 21:46:38.959820   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03
	I0404 21:46:38.959856   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03 (perms=drwx------)
	I0404 21:46:38.959865   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines
	I0404 21:46:38.959873   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines (perms=drwxr-xr-x)
	I0404 21:46:38.959884   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube (perms=drwxr-xr-x)
	I0404 21:46:38.959893   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297 (perms=drwxrwxr-x)
	I0404 21:46:38.959915   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0404 21:46:38.959934   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:46:38.959944   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0404 21:46:38.959952   21531 main.go:141] libmachine: (ha-454952-m03) Creating domain...
	I0404 21:46:38.959998   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297
	I0404 21:46:38.960023   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0404 21:46:38.960034   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins
	I0404 21:46:38.960046   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home
	I0404 21:46:38.960062   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Skipping /home - not owner
	I0404 21:46:38.960996   21531 main.go:141] libmachine: (ha-454952-m03) define libvirt domain using xml: 
	I0404 21:46:38.961025   21531 main.go:141] libmachine: (ha-454952-m03) <domain type='kvm'>
	I0404 21:46:38.961033   21531 main.go:141] libmachine: (ha-454952-m03)   <name>ha-454952-m03</name>
	I0404 21:46:38.961040   21531 main.go:141] libmachine: (ha-454952-m03)   <memory unit='MiB'>2200</memory>
	I0404 21:46:38.961045   21531 main.go:141] libmachine: (ha-454952-m03)   <vcpu>2</vcpu>
	I0404 21:46:38.961050   21531 main.go:141] libmachine: (ha-454952-m03)   <features>
	I0404 21:46:38.961057   21531 main.go:141] libmachine: (ha-454952-m03)     <acpi/>
	I0404 21:46:38.961063   21531 main.go:141] libmachine: (ha-454952-m03)     <apic/>
	I0404 21:46:38.961070   21531 main.go:141] libmachine: (ha-454952-m03)     <pae/>
	I0404 21:46:38.961077   21531 main.go:141] libmachine: (ha-454952-m03)     
	I0404 21:46:38.961084   21531 main.go:141] libmachine: (ha-454952-m03)   </features>
	I0404 21:46:38.961094   21531 main.go:141] libmachine: (ha-454952-m03)   <cpu mode='host-passthrough'>
	I0404 21:46:38.961106   21531 main.go:141] libmachine: (ha-454952-m03)   
	I0404 21:46:38.961114   21531 main.go:141] libmachine: (ha-454952-m03)   </cpu>
	I0404 21:46:38.961141   21531 main.go:141] libmachine: (ha-454952-m03)   <os>
	I0404 21:46:38.961166   21531 main.go:141] libmachine: (ha-454952-m03)     <type>hvm</type>
	I0404 21:46:38.961176   21531 main.go:141] libmachine: (ha-454952-m03)     <boot dev='cdrom'/>
	I0404 21:46:38.961189   21531 main.go:141] libmachine: (ha-454952-m03)     <boot dev='hd'/>
	I0404 21:46:38.961199   21531 main.go:141] libmachine: (ha-454952-m03)     <bootmenu enable='no'/>
	I0404 21:46:38.961209   21531 main.go:141] libmachine: (ha-454952-m03)   </os>
	I0404 21:46:38.961217   21531 main.go:141] libmachine: (ha-454952-m03)   <devices>
	I0404 21:46:38.961229   21531 main.go:141] libmachine: (ha-454952-m03)     <disk type='file' device='cdrom'>
	I0404 21:46:38.961248   21531 main.go:141] libmachine: (ha-454952-m03)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/boot2docker.iso'/>
	I0404 21:46:38.961261   21531 main.go:141] libmachine: (ha-454952-m03)       <target dev='hdc' bus='scsi'/>
	I0404 21:46:38.961300   21531 main.go:141] libmachine: (ha-454952-m03)       <readonly/>
	I0404 21:46:38.961338   21531 main.go:141] libmachine: (ha-454952-m03)     </disk>
	I0404 21:46:38.961355   21531 main.go:141] libmachine: (ha-454952-m03)     <disk type='file' device='disk'>
	I0404 21:46:38.961370   21531 main.go:141] libmachine: (ha-454952-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0404 21:46:38.961408   21531 main.go:141] libmachine: (ha-454952-m03)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/ha-454952-m03.rawdisk'/>
	I0404 21:46:38.961432   21531 main.go:141] libmachine: (ha-454952-m03)       <target dev='hda' bus='virtio'/>
	I0404 21:46:38.961443   21531 main.go:141] libmachine: (ha-454952-m03)     </disk>
	I0404 21:46:38.961451   21531 main.go:141] libmachine: (ha-454952-m03)     <interface type='network'>
	I0404 21:46:38.961463   21531 main.go:141] libmachine: (ha-454952-m03)       <source network='mk-ha-454952'/>
	I0404 21:46:38.961475   21531 main.go:141] libmachine: (ha-454952-m03)       <model type='virtio'/>
	I0404 21:46:38.961487   21531 main.go:141] libmachine: (ha-454952-m03)     </interface>
	I0404 21:46:38.961499   21531 main.go:141] libmachine: (ha-454952-m03)     <interface type='network'>
	I0404 21:46:38.961510   21531 main.go:141] libmachine: (ha-454952-m03)       <source network='default'/>
	I0404 21:46:38.961528   21531 main.go:141] libmachine: (ha-454952-m03)       <model type='virtio'/>
	I0404 21:46:38.961546   21531 main.go:141] libmachine: (ha-454952-m03)     </interface>
	I0404 21:46:38.961563   21531 main.go:141] libmachine: (ha-454952-m03)     <serial type='pty'>
	I0404 21:46:38.961574   21531 main.go:141] libmachine: (ha-454952-m03)       <target port='0'/>
	I0404 21:46:38.961585   21531 main.go:141] libmachine: (ha-454952-m03)     </serial>
	I0404 21:46:38.961595   21531 main.go:141] libmachine: (ha-454952-m03)     <console type='pty'>
	I0404 21:46:38.961607   21531 main.go:141] libmachine: (ha-454952-m03)       <target type='serial' port='0'/>
	I0404 21:46:38.961622   21531 main.go:141] libmachine: (ha-454952-m03)     </console>
	I0404 21:46:38.961638   21531 main.go:141] libmachine: (ha-454952-m03)     <rng model='virtio'>
	I0404 21:46:38.961650   21531 main.go:141] libmachine: (ha-454952-m03)       <backend model='random'>/dev/random</backend>
	I0404 21:46:38.961660   21531 main.go:141] libmachine: (ha-454952-m03)     </rng>
	I0404 21:46:38.961670   21531 main.go:141] libmachine: (ha-454952-m03)     
	I0404 21:46:38.961682   21531 main.go:141] libmachine: (ha-454952-m03)     
	I0404 21:46:38.961697   21531 main.go:141] libmachine: (ha-454952-m03)   </devices>
	I0404 21:46:38.961710   21531 main.go:141] libmachine: (ha-454952-m03) </domain>
	I0404 21:46:38.961720   21531 main.go:141] libmachine: (ha-454952-m03) 
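	The driver defines the new domain from the XML it just printed via the libvirt API. As an illustration of the same step done by hand, the sketch below shells out to virsh instead; the XML file path is hypothetical and virsh is a stand-in for the API calls the kvm2 driver actually makes.

	// define_domain_sketch.go - illustrative only: define and start a libvirt domain
	// from an XML description like the one printed above, by shelling out to virsh.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Hypothetical path: the XML shown in the log, saved to a local file.
		run("virsh", "--connect", "qemu:///system", "define", "/tmp/ha-454952-m03.xml")
		run("virsh", "--connect", "qemu:///system", "start", "ha-454952-m03")
	}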
	I0404 21:46:38.968849   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:10:41:55 in network default
	I0404 21:46:38.969511   21531 main.go:141] libmachine: (ha-454952-m03) Ensuring networks are active...
	I0404 21:46:38.969545   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:38.970384   21531 main.go:141] libmachine: (ha-454952-m03) Ensuring network default is active
	I0404 21:46:38.970739   21531 main.go:141] libmachine: (ha-454952-m03) Ensuring network mk-ha-454952 is active
	I0404 21:46:38.971188   21531 main.go:141] libmachine: (ha-454952-m03) Getting domain xml...
	I0404 21:46:38.971925   21531 main.go:141] libmachine: (ha-454952-m03) Creating domain...
	I0404 21:46:40.197829   21531 main.go:141] libmachine: (ha-454952-m03) Waiting to get IP...
	I0404 21:46:40.198601   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:40.199014   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:40.199054   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:40.198993   22299 retry.go:31] will retry after 264.293345ms: waiting for machine to come up
	I0404 21:46:40.464550   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:40.464998   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:40.465026   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:40.464962   22299 retry.go:31] will retry after 277.153815ms: waiting for machine to come up
	I0404 21:46:40.743411   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:40.743942   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:40.743969   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:40.743888   22299 retry.go:31] will retry after 302.772126ms: waiting for machine to come up
	I0404 21:46:41.048485   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:41.048967   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:41.048994   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:41.048916   22299 retry.go:31] will retry after 554.26818ms: waiting for machine to come up
	I0404 21:46:41.604852   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:41.605279   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:41.605307   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:41.605243   22299 retry.go:31] will retry after 593.569938ms: waiting for machine to come up
	I0404 21:46:42.199905   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:42.200439   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:42.200468   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:42.200400   22299 retry.go:31] will retry after 781.69482ms: waiting for machine to come up
	I0404 21:46:42.983490   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:42.983956   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:42.983983   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:42.983919   22299 retry.go:31] will retry after 999.658039ms: waiting for machine to come up
	I0404 21:46:43.985049   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:43.985669   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:43.985699   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:43.985624   22299 retry.go:31] will retry after 1.386933992s: waiting for machine to come up
	I0404 21:46:45.374475   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:45.374922   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:45.374959   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:45.374865   22299 retry.go:31] will retry after 1.790186863s: waiting for machine to come up
	I0404 21:46:47.167264   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:47.167792   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:47.167827   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:47.167749   22299 retry.go:31] will retry after 2.034077008s: waiting for machine to come up
	I0404 21:46:49.203112   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:49.203633   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:49.203662   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:49.203590   22299 retry.go:31] will retry after 2.285549921s: waiting for machine to come up
	I0404 21:46:51.491955   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:51.492431   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:51.492460   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:51.492366   22299 retry.go:31] will retry after 2.436406698s: waiting for machine to come up
	I0404 21:46:53.929897   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:53.930303   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:53.930330   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:53.930266   22299 retry.go:31] will retry after 4.105717474s: waiting for machine to come up
	I0404 21:46:58.038094   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:58.038630   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:58.038657   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:58.038586   22299 retry.go:31] will retry after 4.207781957s: waiting for machine to come up
	I0404 21:47:02.250815   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.251320   21531 main.go:141] libmachine: (ha-454952-m03) Found IP for machine: 192.168.39.217
	I0404 21:47:02.251340   21531 main.go:141] libmachine: (ha-454952-m03) Reserving static IP address...
	I0404 21:47:02.251353   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has current primary IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.251822   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find host DHCP lease matching {name: "ha-454952-m03", mac: "52:54:00:9a:12:2d", ip: "192.168.39.217"} in network mk-ha-454952
	I0404 21:47:02.327917   21531 main.go:141] libmachine: (ha-454952-m03) Reserved static IP address: 192.168.39.217
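	The repeated attempts above follow a simple grow-the-delay retry pattern (retry.go) while waiting for the domain to pick up a DHCP lease. A generic sketch of that pattern is below; lookupIP is a hypothetical placeholder standing in for the libvirt lease query the driver performs.

	// retry_sketch.go - minimal sketch of retrying with a growing delay, as in the
	// "waiting for machine to come up" loop above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func lookupIP() (string, error) {
		// Placeholder: a real implementation would ask libvirt for the DHCP lease
		// matching the domain's MAC address on the mk-ha-454952 network.
		return "", errors.New("no lease yet")
	}

	func main() {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= 15; attempt++ {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			fmt.Printf("attempt %d: no IP yet, will retry after %v\n", attempt, delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait between attempts, as in the log
		}
		fmt.Println("gave up waiting for an IP")
	}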
	I0404 21:47:02.327960   21531 main.go:141] libmachine: (ha-454952-m03) Waiting for SSH to be available...
	I0404 21:47:02.327971   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Getting to WaitForSSH function...
	I0404 21:47:02.330218   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.330589   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.330622   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.330775   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Using SSH client type: external
	I0404 21:47:02.330809   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa (-rw-------)
	I0404 21:47:02.330839   21531 main.go:141] libmachine: (ha-454952-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 21:47:02.330851   21531 main.go:141] libmachine: (ha-454952-m03) DBG | About to run SSH command:
	I0404 21:47:02.330869   21531 main.go:141] libmachine: (ha-454952-m03) DBG | exit 0
	I0404 21:47:02.460413   21531 main.go:141] libmachine: (ha-454952-m03) DBG | SSH cmd err, output: <nil>: 
	I0404 21:47:02.460800   21531 main.go:141] libmachine: (ha-454952-m03) KVM machine creation complete!
	I0404 21:47:02.461059   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetConfigRaw
	I0404 21:47:02.461581   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:02.461784   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:02.461974   21531 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0404 21:47:02.461989   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetState
	I0404 21:47:02.463411   21531 main.go:141] libmachine: Detecting operating system of created instance...
	I0404 21:47:02.463429   21531 main.go:141] libmachine: Waiting for SSH to be available...
	I0404 21:47:02.463446   21531 main.go:141] libmachine: Getting to WaitForSSH function...
	I0404 21:47:02.463453   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:02.465846   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.466279   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.466310   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.466517   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:02.466719   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.466916   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.467061   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:02.467198   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:02.467427   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:02.467440   21531 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0404 21:47:02.571581   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:47:02.571618   21531 main.go:141] libmachine: Detecting the provisioner...
	I0404 21:47:02.571648   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:02.574609   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.575029   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.575072   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.575328   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:02.575580   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.575729   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.575877   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:02.576045   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:02.576242   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:02.576253   21531 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0404 21:47:02.681449   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0404 21:47:02.681513   21531 main.go:141] libmachine: found compatible host: buildroot
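	Provisioner detection is essentially a parse of the /etc/os-release output shown above. A self-contained sketch of that parse, run on the guest and assuming the standard /etc/os-release path (field names per os-release(5)):

	// osrelease_sketch.go - illustrative parse of /etc/os-release to detect a
	// Buildroot host, mirroring the "Detecting the provisioner" step above.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/os-release")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		fields := map[string]string{}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if k, v, ok := strings.Cut(sc.Text(), "="); ok {
				fields[k] = strings.Trim(v, `"`)
			}
		}
		fmt.Println("ID:", fields["ID"], "VERSION_ID:", fields["VERSION_ID"])
		if fields["ID"] == "buildroot" {
			fmt.Println("found compatible host: buildroot")
		}
	}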
	I0404 21:47:02.681520   21531 main.go:141] libmachine: Provisioning with buildroot...
	I0404 21:47:02.681528   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetMachineName
	I0404 21:47:02.681763   21531 buildroot.go:166] provisioning hostname "ha-454952-m03"
	I0404 21:47:02.681792   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetMachineName
	I0404 21:47:02.681994   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:02.684978   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.685335   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.685363   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.685478   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:02.685659   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.685826   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.685949   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:02.686152   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:02.686350   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:02.686367   21531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-454952-m03 && echo "ha-454952-m03" | sudo tee /etc/hostname
	I0404 21:47:02.808594   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-454952-m03
	
	I0404 21:47:02.808621   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:02.811675   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.812015   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.812041   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.812263   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:02.812459   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.812609   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.812713   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:02.812839   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:02.813038   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:02.813071   21531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-454952-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-454952-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-454952-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 21:47:02.932179   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:47:02.932211   21531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 21:47:02.932229   21531 buildroot.go:174] setting up certificates
	I0404 21:47:02.932248   21531 provision.go:84] configureAuth start
	I0404 21:47:02.932264   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetMachineName
	I0404 21:47:02.932561   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:47:02.934986   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.935325   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.935354   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.935473   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:02.937751   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.938068   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.938095   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.938196   21531 provision.go:143] copyHostCerts
	I0404 21:47:02.938224   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:47:02.938261   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 21:47:02.938273   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:47:02.938344   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 21:47:02.938438   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:47:02.938463   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 21:47:02.938471   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:47:02.938512   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 21:47:02.938575   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:47:02.938597   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 21:47:02.938610   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:47:02.938647   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 21:47:02.938710   21531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.ha-454952-m03 san=[127.0.0.1 192.168.39.217 ha-454952-m03 localhost minikube]
	I0404 21:47:03.114002   21531 provision.go:177] copyRemoteCerts
	I0404 21:47:03.114058   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 21:47:03.114079   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:03.116814   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.117222   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.117250   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.117449   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.117660   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.117830   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.117979   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:47:03.207569   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0404 21:47:03.207651   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0404 21:47:03.239055   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0404 21:47:03.239122   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 21:47:03.269252   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0404 21:47:03.269316   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 21:47:03.299508   21531 provision.go:87] duration metric: took 367.244373ms to configureAuth
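The configureAuth step above copies the host CA material and generates a server certificate whose SAN list mixes IP addresses and hostnames (see the "generating server cert ... san=[...]" line). Below is a minimal Go sketch of that SAN handling; it is self-signed for brevity and is not minikube's provisioner, which signs with the ca.pem/ca-key.pem shown in the log.

// Minimal sketch: a server certificate carrying the SAN list from the log.
// Self-signed for brevity; minikube instead signs with its CA key.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-454952-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line: IPs go in IPAddresses, names in DNSNames.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.217")},
		DNSNames:    []string{"ha-454952-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}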
	I0404 21:47:03.299539   21531 buildroot.go:189] setting minikube options for container-runtime
	I0404 21:47:03.299802   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:47:03.299883   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:03.302546   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.302965   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.303007   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.303144   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.303334   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.303530   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.303668   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.303835   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:03.304007   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:03.304021   21531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 21:47:03.589600   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 21:47:03.589641   21531 main.go:141] libmachine: Checking connection to Docker...
	I0404 21:47:03.589654   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetURL
	I0404 21:47:03.591172   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Using libvirt version 6000000
	I0404 21:47:03.593791   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.594282   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.594309   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.594507   21531 main.go:141] libmachine: Docker is up and running!
	I0404 21:47:03.594522   21531 main.go:141] libmachine: Reticulating splines...
	I0404 21:47:03.594529   21531 client.go:171] duration metric: took 25.070684836s to LocalClient.Create
	I0404 21:47:03.594549   21531 start.go:167] duration metric: took 25.070764129s to libmachine.API.Create "ha-454952"
	I0404 21:47:03.594556   21531 start.go:293] postStartSetup for "ha-454952-m03" (driver="kvm2")
	I0404 21:47:03.594568   21531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 21:47:03.594583   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:03.594861   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 21:47:03.594884   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:03.597411   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.597944   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.597982   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.598152   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.598348   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.598537   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.598734   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:47:03.683420   21531 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 21:47:03.688599   21531 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 21:47:03.688621   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 21:47:03.688680   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 21:47:03.688775   21531 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 21:47:03.688791   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /etc/ssl/certs/125542.pem
	I0404 21:47:03.688911   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 21:47:03.699405   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:47:03.728970   21531 start.go:296] duration metric: took 134.401187ms for postStartSetup
	I0404 21:47:03.729023   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetConfigRaw
	I0404 21:47:03.729580   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:47:03.732110   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.732509   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.732541   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.732785   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:47:03.732967   21531 start.go:128] duration metric: took 25.229435833s to createHost
	I0404 21:47:03.732989   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:03.735151   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.735465   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.735491   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.735597   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.735752   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.735931   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.736070   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.736247   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:03.736407   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:03.736418   21531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 21:47:03.841380   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712267223.811853161
	
	I0404 21:47:03.841403   21531 fix.go:216] guest clock: 1712267223.811853161
	I0404 21:47:03.841410   21531 fix.go:229] Guest: 2024-04-04 21:47:03.811853161 +0000 UTC Remote: 2024-04-04 21:47:03.732979005 +0000 UTC m=+181.129612197 (delta=78.874156ms)
	I0404 21:47:03.841424   21531 fix.go:200] guest clock delta is within tolerance: 78.874156ms
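The clock check above runs "date +%s.%N" on the guest and compares the result against the host clock. A minimal sketch of that comparison follows; the log reports the delta but not the threshold, so the 2s tolerance used here is an assumption for illustration.

// Minimal sketch: parse the guest's "date +%s.%N" output and check the skew
// against the host clock. The 2s tolerance is an assumed value.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func withinTolerance(guestEpoch string, tolerance time.Duration) (bool, time.Duration, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return false, 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	ok := math.Abs(float64(delta)) <= float64(tolerance)
	return ok, delta, nil
}

func main() {
	ok, delta, err := withinTolerance("1712267223.811853161", 2*time.Second)
	fmt.Println(ok, delta, err)
}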
	I0404 21:47:03.841429   21531 start.go:83] releasing machines lock for "ha-454952-m03", held for 25.338005514s
	I0404 21:47:03.841454   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:03.841735   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:47:03.844330   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.844672   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.844704   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.847227   21531 out.go:177] * Found network options:
	I0404 21:47:03.848931   21531 out.go:177]   - NO_PROXY=192.168.39.13,192.168.39.60
	W0404 21:47:03.850171   21531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0404 21:47:03.850197   21531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0404 21:47:03.850216   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:03.850838   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:03.851027   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:03.851124   21531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 21:47:03.851161   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	W0404 21:47:03.851221   21531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0404 21:47:03.851245   21531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0404 21:47:03.851303   21531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 21:47:03.851321   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:03.853996   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.854291   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.854426   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.854453   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.854609   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.854719   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.854755   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.854819   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.854932   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.855016   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.855091   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.855130   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:47:03.855343   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.855487   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:47:04.099816   21531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 21:47:04.106362   21531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 21:47:04.106421   21531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 21:47:04.123378   21531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 21:47:04.123410   21531 start.go:494] detecting cgroup driver to use...
	I0404 21:47:04.123488   21531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 21:47:04.141852   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 21:47:04.159165   21531 docker.go:217] disabling cri-docker service (if available) ...
	I0404 21:47:04.159229   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 21:47:04.177006   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 21:47:04.194125   21531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 21:47:04.327940   21531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 21:47:04.504790   21531 docker.go:233] disabling docker service ...
	I0404 21:47:04.504863   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 21:47:04.520940   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 21:47:04.535619   21531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 21:47:04.681131   21531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 21:47:04.832749   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 21:47:04.850027   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 21:47:04.870589   21531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 21:47:04.870640   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.883131   21531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 21:47:04.883221   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.895438   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.906843   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.920442   21531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 21:47:04.935807   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.947559   21531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.966537   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.979817   21531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 21:47:04.993294   21531 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 21:47:04.993370   21531 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 21:47:05.009157   21531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 21:47:05.020517   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:47:05.149876   21531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 21:47:05.294829   21531 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 21:47:05.294893   21531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 21:47:05.300168   21531 start.go:562] Will wait 60s for crictl version
	I0404 21:47:05.300230   21531 ssh_runner.go:195] Run: which crictl
	I0404 21:47:05.304472   21531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 21:47:05.347248   21531 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
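After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear before calling crictl. A minimal sketch of such a socket wait, with a hypothetical waitForSocket helper (the path and timeout mirror the log; the code is illustrative, not minikube's implementation):

// Minimal sketch: poll for the CRI-O socket until it exists or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("socket is ready; safe to run: sudo crictl version")
}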
	I0404 21:47:05.347328   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:47:05.377891   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:47:05.413271   21531 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 21:47:05.414917   21531 out.go:177]   - env NO_PROXY=192.168.39.13
	I0404 21:47:05.416432   21531 out.go:177]   - env NO_PROXY=192.168.39.13,192.168.39.60
	I0404 21:47:05.418002   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:47:05.420812   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:05.421166   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:05.421211   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:05.421406   21531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 21:47:05.426334   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:47:05.439120   21531 mustload.go:65] Loading cluster: ha-454952
	I0404 21:47:05.439353   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:47:05.439598   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:47:05.439640   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:47:05.457894   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44213
	I0404 21:47:05.458324   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:47:05.458931   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:47:05.458957   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:47:05.459279   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:47:05.459522   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:47:05.461375   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:47:05.461816   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:47:05.461864   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:47:05.478759   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0404 21:47:05.479203   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:47:05.479725   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:47:05.479746   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:47:05.480083   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:47:05.480272   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:47:05.480420   21531 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952 for IP: 192.168.39.217
	I0404 21:47:05.480433   21531 certs.go:194] generating shared ca certs ...
	I0404 21:47:05.480453   21531 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:47:05.480601   21531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 21:47:05.480639   21531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 21:47:05.480647   21531 certs.go:256] generating profile certs ...
	I0404 21:47:05.480742   21531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key
	I0404 21:47:05.480776   21531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.d2808486
	I0404 21:47:05.480797   21531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.d2808486 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.13 192.168.39.60 192.168.39.217 192.168.39.254]
	I0404 21:47:05.603531   21531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.d2808486 ...
	I0404 21:47:05.603568   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.d2808486: {Name:mk0cc3bbe2d9482aa4cd27d58f26cfde4dced9b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:47:05.603784   21531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.d2808486 ...
	I0404 21:47:05.603813   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.d2808486: {Name:mk40ea018c5e3d70413a022d8b7dd05636971c53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:47:05.603934   21531 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.d2808486 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt
	I0404 21:47:05.604067   21531 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.d2808486 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key
	I0404 21:47:05.604218   21531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key
	I0404 21:47:05.604233   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0404 21:47:05.604247   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0404 21:47:05.604257   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0404 21:47:05.604270   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0404 21:47:05.604285   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0404 21:47:05.604298   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0404 21:47:05.604309   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0404 21:47:05.604322   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0404 21:47:05.604411   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 21:47:05.604442   21531 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 21:47:05.604450   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 21:47:05.604470   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 21:47:05.604492   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 21:47:05.604515   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 21:47:05.604551   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:47:05.604576   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /usr/share/ca-certificates/125542.pem
	I0404 21:47:05.604591   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:47:05.604603   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem -> /usr/share/ca-certificates/12554.pem
	I0404 21:47:05.604632   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:47:05.608137   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:47:05.608594   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:47:05.608624   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:47:05.608848   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:47:05.609053   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:47:05.609215   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:47:05.609485   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:47:05.688518   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0404 21:47:05.694369   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0404 21:47:05.707260   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0404 21:47:05.713260   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0404 21:47:05.726270   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0404 21:47:05.731336   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0404 21:47:05.743032   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0404 21:47:05.747381   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0404 21:47:05.759932   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0404 21:47:05.765393   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0404 21:47:05.779336   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0404 21:47:05.785583   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0404 21:47:05.801216   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 21:47:05.830450   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 21:47:05.858543   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 21:47:05.885952   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 21:47:05.915827   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0404 21:47:05.945702   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 21:47:05.973323   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 21:47:05.999777   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 21:47:06.027485   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 21:47:06.054894   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 21:47:06.080707   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 21:47:06.112038   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0404 21:47:06.130812   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0404 21:47:06.149359   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0404 21:47:06.168517   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0404 21:47:06.187518   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0404 21:47:06.206356   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0404 21:47:06.226931   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0404 21:47:06.244924   21531 ssh_runner.go:195] Run: openssl version
	I0404 21:47:06.250867   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 21:47:06.261977   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 21:47:06.266832   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 21:47:06.266893   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 21:47:06.273526   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 21:47:06.286438   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 21:47:06.298083   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:47:06.303030   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:47:06.303083   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:47:06.308949   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 21:47:06.320340   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 21:47:06.331957   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 21:47:06.337071   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 21:47:06.337135   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 21:47:06.343633   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 21:47:06.355323   21531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 21:47:06.359818   21531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0404 21:47:06.359868   21531 kubeadm.go:928] updating node {m03 192.168.39.217 8443 v1.29.3 crio true true} ...
	I0404 21:47:06.359958   21531 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-454952-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 21:47:06.359992   21531 kube-vip.go:111] generating kube-vip config ...
	I0404 21:47:06.360035   21531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0404 21:47:06.383555   21531 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0404 21:47:06.383629   21531 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0404 21:47:06.383703   21531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 21:47:06.405837   21531 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0404 21:47:06.405891   21531 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0404 21:47:06.418113   21531 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0404 21:47:06.418137   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0404 21:47:06.418113   21531 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0404 21:47:06.418118   21531 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0404 21:47:06.418181   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0404 21:47:06.418186   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:47:06.418273   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0404 21:47:06.418202   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0404 21:47:06.424007   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0404 21:47:06.424036   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0404 21:47:06.468728   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0404 21:47:06.468755   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0404 21:47:06.468767   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0404 21:47:06.468872   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0404 21:47:06.515419   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0404 21:47:06.515459   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
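The kubectl/kubeadm/kubelet URLs above carry a "?checksum=file:<url>.sha256" hint, meaning the downloaded binary is expected to match the published SHA-256 digest before it is copied to /var/lib/minikube/binaries. A minimal sketch of that verification follows; it is illustrative only and is not minikube's downloader.

// Minimal sketch: download a release binary and compare it against the
// published .sha256 digest, as implied by the checksum hint in the URLs above.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	fmt.Println("checksum ok:", hex.EncodeToString(got[:]) == want)
}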
	I0404 21:47:07.359563   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0404 21:47:07.370631   21531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0404 21:47:07.391971   21531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 21:47:07.412735   21531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0404 21:47:07.433476   21531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0404 21:47:07.438016   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:47:07.451197   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:47:07.598119   21531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:47:07.617905   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:47:07.618256   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:47:07.618309   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:47:07.634014   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0404 21:47:07.634519   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:47:07.634993   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:47:07.635011   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:47:07.635398   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:47:07.635653   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:47:07.635810   21531 start.go:316] joinCluster: &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cluster
Name:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:47:07.635985   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0404 21:47:07.636014   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:47:07.638766   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:47:07.639250   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:47:07.639283   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:47:07.639408   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:47:07.639586   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:47:07.639761   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:47:07.639918   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:47:07.815167   21531 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:47:07.815224   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1pn74p.cie5sg4qa194aihi --discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-454952-m03 --control-plane --apiserver-advertise-address=192.168.39.217 --apiserver-bind-port=8443"
	I0404 21:47:36.022831   21531 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1pn74p.cie5sg4qa194aihi --discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-454952-m03 --control-plane --apiserver-advertise-address=192.168.39.217 --apiserver-bind-port=8443": (28.207584886s)
	I0404 21:47:36.022867   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0404 21:47:36.457236   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-454952-m03 minikube.k8s.io/updated_at=2024_04_04T21_47_36_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=ha-454952 minikube.k8s.io/primary=false
	I0404 21:47:36.597917   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-454952-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0404 21:47:36.708053   21531 start.go:318] duration metric: took 29.072241272s to joinCluster
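The join above is a two-step flow: the primary prints a join command with a non-expiring token, and the new node replays it with additional control-plane flags. A minimal sketch using plain exec calls (minikube runs both steps over SSH, as the log shows; this is only an illustration of the flow):

// Minimal sketch: obtain a kubeadm join command from the primary and append
// the control-plane flags seen in the log before running it on the new node.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (on the primary): print a join command with a non-expiring token.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatal(err)
	}
	join := strings.TrimSpace(string(out))

	// Step 2 (on the joining node): add the control-plane specific flags.
	join += " --control-plane --apiserver-advertise-address=192.168.39.217 --apiserver-bind-port=8443"
	fmt.Println("would run:", join)
}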
	I0404 21:47:36.708112   21531 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:47:36.709890   21531 out.go:177] * Verifying Kubernetes components...
	I0404 21:47:36.708439   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:47:36.711385   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:47:37.022947   21531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:47:37.087214   21531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:47:37.087547   21531 kapi.go:59] client config for ha-454952: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key", CAFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5c6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0404 21:47:37.087629   21531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.13:8443
	I0404 21:47:37.087891   21531 node_ready.go:35] waiting up to 6m0s for node "ha-454952-m03" to be "Ready" ...
	I0404 21:47:37.087995   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:37.088006   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:37.088016   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:37.088026   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:37.093468   21531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0404 21:47:37.588763   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:37.588786   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:37.588797   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:37.588806   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:37.593379   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:38.088873   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:38.088899   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:38.088911   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:38.088917   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:38.093168   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:38.588850   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:38.588878   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:38.588888   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:38.588893   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:38.593483   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:39.088168   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:39.088189   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:39.088197   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:39.088201   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:39.092598   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:39.093497   21531 node_ready.go:53] node "ha-454952-m03" has status "Ready":"False"
	I0404 21:47:39.588772   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:39.588793   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:39.588800   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:39.588805   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:39.592822   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:40.088570   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:40.088616   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:40.088627   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:40.088633   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:40.092576   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:40.588369   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:40.588390   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:40.588397   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:40.588401   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:40.592489   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:41.088719   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:41.088740   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:41.088749   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:41.088753   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:41.093469   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:41.094277   21531 node_ready.go:53] node "ha-454952-m03" has status "Ready":"False"
	I0404 21:47:41.588611   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:41.588635   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:41.588646   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:41.588651   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:41.592703   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:42.088660   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:42.088683   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.088691   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.088696   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.093144   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:42.588673   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:42.588709   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.588720   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.588726   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.593147   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:42.593880   21531 node_ready.go:49] node "ha-454952-m03" has status "Ready":"True"
	I0404 21:47:42.593907   21531 node_ready.go:38] duration metric: took 5.505995976s for node "ha-454952-m03" to be "Ready" ...
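The run of GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03 requests above is node_ready.go polling the node object roughly twice a second until its Ready condition turns True. A rough client-go equivalent of that loop, assuming a clientset built as in the previous sketch (package name, interval, and error wording are illustrative):

	package ready

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady polls a node until its NodeReady condition reports True or
	// the context expires, mirroring the ~500ms polling loop in the log above.
	func WaitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
			case <-ticker.C:
			}
		}
	}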
	I0404 21:47:42.593918   21531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 21:47:42.593994   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:47:42.594008   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.594019   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.594025   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.601196   21531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0404 21:47:42.609597   21531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9qsz7" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.609700   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-9qsz7
	I0404 21:47:42.609717   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.609727   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.609735   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.613047   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:42.613723   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:42.613736   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.613744   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.613748   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.616436   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.616963   21531 pod_ready.go:92] pod "coredns-76f75df574-9qsz7" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:42.616978   21531 pod_ready.go:81] duration metric: took 7.352588ms for pod "coredns-76f75df574-9qsz7" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.616987   21531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hsdfw" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.617030   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-hsdfw
	I0404 21:47:42.617037   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.617044   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.617050   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.619751   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.620582   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:42.620604   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.620611   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.620624   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.623245   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.623643   21531 pod_ready.go:92] pod "coredns-76f75df574-hsdfw" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:42.623659   21531 pod_ready.go:81] duration metric: took 6.666239ms for pod "coredns-76f75df574-hsdfw" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.623668   21531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.623709   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952
	I0404 21:47:42.623717   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.623723   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.623727   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.626447   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.626937   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:42.626950   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.626957   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.626962   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.629416   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.629901   21531 pod_ready.go:92] pod "etcd-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:42.629916   21531 pod_ready.go:81] duration metric: took 6.242973ms for pod "etcd-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.629925   21531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.629975   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:47:42.629983   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.629990   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.629995   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.633192   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:42.633942   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:42.633959   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.633968   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.633976   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.636510   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.636957   21531 pod_ready.go:92] pod "etcd-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:42.636970   21531 pod_ready.go:81] duration metric: took 7.039766ms for pod "etcd-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.636981   21531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.789114   21531 request.go:629] Waited for 152.070592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:42.789163   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:42.789169   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.789176   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.789181   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.793499   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:42.989502   21531 request.go:629] Waited for 195.358854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:42.989578   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:42.989587   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.989597   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.989602   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.994226   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
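The request.go:629 "Waited ... due to client-side throttling, not priority and fairness" entries are produced by client-go itself, not by the API server: the rest.Config dumped earlier has QPS:0 and Burst:0, so the client falls back to its built-in defaults (5 requests/s, burst 10) and delays the paired pod/node GETs once the readiness checks start bursting. Raising the limiter is a per-config knob, sketched below; the numbers are arbitrary examples, not what minikube ships with.

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		// With QPS/Burst left at zero, client-go applies its defaults (5 qps,
		// burst 10) and emits the "client-side throttling" messages seen above
		// whenever a burst of requests exceeds them.
		cfg.QPS = 50    // example value
		cfg.Burst = 100 // example value
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("client configured with QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
	}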
	I0404 21:47:43.189189   21531 request.go:629] Waited for 51.228709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:43.189245   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:43.189251   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:43.189261   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:43.189265   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:43.193616   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:43.388827   21531 request.go:629] Waited for 194.308739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:43.388890   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:43.388898   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:43.388908   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:43.388915   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:43.393180   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:43.637281   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:43.637310   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:43.637321   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:43.637328   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:43.641617   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:43.788932   21531 request.go:629] Waited for 146.432841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:43.789007   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:43.789024   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:43.789032   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:43.789036   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:43.793632   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:44.137617   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:44.137639   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:44.137647   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:44.137652   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:44.141797   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:44.188968   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:44.188989   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:44.188997   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:44.189000   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:44.192891   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:44.637905   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:44.637926   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:44.637933   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:44.637937   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:44.641521   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:44.642196   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:44.642216   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:44.642226   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:44.642232   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:44.645544   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:44.646190   21531 pod_ready.go:102] pod "etcd-ha-454952-m03" in "kube-system" namespace has status "Ready":"False"
	I0404 21:47:45.137373   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:45.137408   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:45.137429   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:45.137434   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:45.141414   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:45.142207   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:45.142220   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:45.142226   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:45.142231   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:45.145713   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:45.637644   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:45.637669   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:45.637679   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:45.637683   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:45.641840   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:45.642706   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:45.642723   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:45.642734   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:45.642741   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:45.645561   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:46.137543   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:46.137566   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:46.137573   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:46.137577   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:46.141603   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:46.142472   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:46.142488   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:46.142495   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:46.142498   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:46.145597   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:46.637662   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:46.637689   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:46.637697   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:46.637702   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:46.642465   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:46.643258   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:46.643274   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:46.643282   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:46.643286   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:46.646810   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:46.647562   21531 pod_ready.go:102] pod "etcd-ha-454952-m03" in "kube-system" namespace has status "Ready":"False"
	I0404 21:47:47.137788   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:47.137808   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:47.137815   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:47.137819   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:47.141794   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:47.142529   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:47.142546   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:47.142553   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:47.142559   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:47.145451   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:47.637232   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:47.637252   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:47.637259   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:47.637264   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:47.641356   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:47.642238   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:47.642254   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:47.642263   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:47.642268   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:47.646935   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:48.137913   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:48.137940   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:48.137949   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:48.137959   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:48.141476   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:48.142143   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:48.142163   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:48.142173   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:48.142179   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:48.145222   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:48.637719   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:48.637745   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:48.637756   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:48.637762   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:48.641979   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:48.642777   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:48.642799   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:48.642808   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:48.642813   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:48.645736   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:49.138231   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:49.138255   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.138266   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.138271   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.143309   21531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0404 21:47:49.144334   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:49.144355   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.144367   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.144371   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.147675   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.148667   21531 pod_ready.go:102] pod "etcd-ha-454952-m03" in "kube-system" namespace has status "Ready":"False"
	I0404 21:47:49.638112   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:49.638216   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.638235   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.638256   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.641823   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.642752   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:49.642772   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.642783   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.642788   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.645830   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.646281   21531 pod_ready.go:92] pod "etcd-ha-454952-m03" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:49.646299   21531 pod_ready.go:81] duration metric: took 7.009306934s for pod "etcd-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
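Each pod_ready.go wait above follows the same pattern: fetch the pod from kube-system, fetch the node it is scheduled on, and keep polling until the pod's Ready condition is True. The sketch below shows the pod-side half of that check with client-go (package name and function signature are illustrative):

	package ready

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// PodIsReady fetches one pod and reports whether its Ready condition is
	// True, the signal the pod_ready.go waits in the log key off.
	func PodIsReady(ctx context.Context, c kubernetes.Interface, namespace, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}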
	I0404 21:47:49.646325   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.646403   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-454952
	I0404 21:47:49.646412   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.646422   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.646430   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.649330   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:49.649965   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:49.649980   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.650003   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.650011   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.652978   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:49.653544   21531 pod_ready.go:92] pod "kube-apiserver-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:49.653563   21531 pod_ready.go:81] duration metric: took 7.226681ms for pod "kube-apiserver-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.653589   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.653671   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-454952-m02
	I0404 21:47:49.653681   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.653691   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.653698   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.656742   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.657314   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:49.657331   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.657342   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.657347   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.660256   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:49.660788   21531 pod_ready.go:92] pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:49.660804   21531 pod_ready.go:81] duration metric: took 7.204956ms for pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.660813   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.660858   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-454952-m03
	I0404 21:47:49.660866   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.660872   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.660876   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.664699   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.788774   21531 request.go:629] Waited for 122.730778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:49.788824   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:49.788837   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.788860   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.788868   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.792815   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.793287   21531 pod_ready.go:92] pod "kube-apiserver-ha-454952-m03" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:49.793312   21531 pod_ready.go:81] duration metric: took 132.491239ms for pod "kube-apiserver-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.793326   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.988698   21531 request.go:629] Waited for 195.289882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952
	I0404 21:47:49.988817   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952
	I0404 21:47:49.988824   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.988837   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.988842   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.992748   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:50.188695   21531 request.go:629] Waited for 195.268681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:50.188761   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:50.188766   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:50.188773   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:50.188785   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:50.193289   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:50.193835   21531 pod_ready.go:92] pod "kube-controller-manager-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:50.193870   21531 pod_ready.go:81] duration metric: took 400.534499ms for pod "kube-controller-manager-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:50.193884   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:50.389244   21531 request.go:629] Waited for 195.275135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m02
	I0404 21:47:50.389344   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m02
	I0404 21:47:50.389352   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:50.389363   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:50.389381   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:50.393830   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:50.588784   21531 request.go:629] Waited for 193.944084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:50.588873   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:50.588888   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:50.588898   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:50.588908   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:50.593077   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:50.593723   21531 pod_ready.go:92] pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:50.593740   21531 pod_ready.go:81] duration metric: took 399.848828ms for pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:50.593749   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:50.788930   21531 request.go:629] Waited for 195.126625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m03
	I0404 21:47:50.788996   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m03
	I0404 21:47:50.789004   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:50.789014   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:50.789018   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:50.793082   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:50.989317   21531 request.go:629] Waited for 195.402098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:50.989393   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:50.989398   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:50.989405   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:50.989409   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:50.993530   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:50.994104   21531 pod_ready.go:92] pod "kube-controller-manager-ha-454952-m03" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:50.994127   21531 pod_ready.go:81] duration metric: took 400.370156ms for pod "kube-controller-manager-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:50.994142   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6nkxm" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:51.189151   21531 request.go:629] Waited for 194.949221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nkxm
	I0404 21:47:51.189217   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nkxm
	I0404 21:47:51.189225   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:51.189235   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:51.189246   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:51.193508   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:51.388819   21531 request.go:629] Waited for 194.281073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:51.388882   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:51.388898   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:51.388912   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:51.388919   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:51.392793   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:51.393464   21531 pod_ready.go:92] pod "kube-proxy-6nkxm" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:51.393484   21531 pod_ready.go:81] duration metric: took 399.334643ms for pod "kube-proxy-6nkxm" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:51.393494   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fl4jh" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:51.589561   21531 request.go:629] Waited for 196.010357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fl4jh
	I0404 21:47:51.589644   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fl4jh
	I0404 21:47:51.589650   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:51.589658   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:51.589662   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:51.594586   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:51.789678   21531 request.go:629] Waited for 194.375907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:51.789737   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:51.789743   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:51.789750   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:51.789754   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:51.793886   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:51.794324   21531 pod_ready.go:92] pod "kube-proxy-fl4jh" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:51.794343   21531 pod_ready.go:81] duration metric: took 400.842302ms for pod "kube-proxy-fl4jh" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:51.794353   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gjvm9" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:51.989491   21531 request.go:629] Waited for 195.06636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjvm9
	I0404 21:47:51.989597   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjvm9
	I0404 21:47:51.989616   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:51.989631   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:51.989640   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:51.994034   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:52.189045   21531 request.go:629] Waited for 194.367312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:52.189112   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:52.189118   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:52.189128   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:52.189133   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:52.193117   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:52.193745   21531 pod_ready.go:92] pod "kube-proxy-gjvm9" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:52.193765   21531 pod_ready.go:81] duration metric: took 399.404583ms for pod "kube-proxy-gjvm9" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:52.193778   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:52.388706   21531 request.go:629] Waited for 194.860122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952
	I0404 21:47:52.388836   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952
	I0404 21:47:52.388844   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:52.388856   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:52.388904   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:52.393000   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:52.588978   21531 request.go:629] Waited for 195.367456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:52.589030   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:52.589036   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:52.589049   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:52.589055   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:52.593712   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:52.594749   21531 pod_ready.go:92] pod "kube-scheduler-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:52.594775   21531 pod_ready.go:81] duration metric: took 400.981465ms for pod "kube-scheduler-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:52.594788   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:52.789149   21531 request.go:629] Waited for 194.286662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m02
	I0404 21:47:52.789212   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m02
	I0404 21:47:52.789218   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:52.789225   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:52.789230   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:52.793336   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:52.989327   21531 request.go:629] Waited for 195.256576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:52.989402   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:52.989413   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:52.989422   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:52.989428   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:52.993245   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:52.993935   21531 pod_ready.go:92] pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:52.993957   21531 pod_ready.go:81] duration metric: took 399.160574ms for pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:52.993970   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:53.189075   21531 request.go:629] Waited for 195.01053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m03
	I0404 21:47:53.189130   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m03
	I0404 21:47:53.189135   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.189142   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.189147   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.193145   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:53.389470   21531 request.go:629] Waited for 195.359511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:53.389548   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:53.389560   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.389569   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.389580   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.393665   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:53.394478   21531 pod_ready.go:92] pod "kube-scheduler-ha-454952-m03" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:53.394497   21531 pod_ready.go:81] duration metric: took 400.519758ms for pod "kube-scheduler-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:53.394508   21531 pod_ready.go:38] duration metric: took 10.800579463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 21:47:53.394523   21531 api_server.go:52] waiting for apiserver process to appear ...
	I0404 21:47:53.394572   21531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:47:53.410622   21531 api_server.go:72] duration metric: took 16.702457623s to wait for apiserver process to appear ...
	I0404 21:47:53.410646   21531 api_server.go:88] waiting for apiserver healthz status ...
	I0404 21:47:53.410663   21531 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0404 21:47:53.415122   21531 api_server.go:279] https://192.168.39.13:8443/healthz returned 200:
	ok
	I0404 21:47:53.415197   21531 round_trippers.go:463] GET https://192.168.39.13:8443/version
	I0404 21:47:53.415205   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.415216   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.415226   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.416582   21531 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0404 21:47:53.416723   21531 api_server.go:141] control plane version: v1.29.3
	I0404 21:47:53.416747   21531 api_server.go:131] duration metric: took 6.093013ms to wait for apiserver health ...
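With every control-plane pod Ready, the verifier drops down to two plain HTTP probes: /healthz has to answer 200 with the body "ok", and /version supplies the control-plane version (v1.29.3) recorded above. Roughly the same probes through a client-go clientset, assuming one built as in the earlier sketches:

	package ready

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
	)

	// CheckAPIServer runs the same two probes as api_server.go above: a raw
	// GET of /healthz (expected body "ok") and a /version query.
	func CheckAPIServer(ctx context.Context, c kubernetes.Interface) error {
		body, err := c.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			return fmt.Errorf("healthz probe failed: %w", err)
		}
		if string(body) != "ok" {
			return fmt.Errorf("unexpected healthz body %q", body)
		}
		info, err := c.Discovery().ServerVersion()
		if err != nil {
			return fmt.Errorf("version probe failed: %w", err)
		}
		fmt.Println("control plane version:", info.GitVersion)
		return nil
	}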
	I0404 21:47:53.416781   21531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 21:47:53.589448   21531 request.go:629] Waited for 172.559488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:47:53.589502   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:47:53.589514   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.589524   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.589530   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.598660   21531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0404 21:47:53.605181   21531 system_pods.go:59] 24 kube-system pods found
	I0404 21:47:53.605213   21531 system_pods.go:61] "coredns-76f75df574-9qsz7" [5af3d10e-47b7-439c-80e3-8ee328d87f16] Running
	I0404 21:47:53.605220   21531 system_pods.go:61] "coredns-76f75df574-hsdfw" [e0384e31-ec4b-4b09-b387-5bce7a36b688] Running
	I0404 21:47:53.605225   21531 system_pods.go:61] "etcd-ha-454952" [d3885d9d-9e4a-4ebb-9bbb-85d1fc88519a] Running
	I0404 21:47:53.605230   21531 system_pods.go:61] "etcd-ha-454952-m02" [a84a3d55-0e63-4944-8368-141d61a3dfdd] Running
	I0404 21:47:53.605233   21531 system_pods.go:61] "etcd-ha-454952-m03" [d2982156-d120-43d3-baf6-853acc915bb8] Running
	I0404 21:47:53.605238   21531 system_pods.go:61] "kindnet-7c9dv" [044a24e9-2851-47a3-be58-29ecfae2f0fb] Running
	I0404 21:47:53.605242   21531 system_pods.go:61] "kindnet-7v9fp" [9bf17455-7a45-4fbf-82d2-55bebd46ee2a] Running
	I0404 21:47:53.605247   21531 system_pods.go:61] "kindnet-v8wv6" [44250298-dce4-4e12-88c2-e347b4a63711] Running
	I0404 21:47:53.605250   21531 system_pods.go:61] "kube-apiserver-ha-454952" [93735313-30dc-4c96-b847-a6119cf400c8] Running
	I0404 21:47:53.605255   21531 system_pods.go:61] "kube-apiserver-ha-454952-m02" [b1abcf64-e80a-4e10-b069-c7d6827bda4a] Running
	I0404 21:47:53.605260   21531 system_pods.go:61] "kube-apiserver-ha-454952-m03" [80a7d0c0-874f-47e4-ab91-b40d5d89e741] Running
	I0404 21:47:53.605266   21531 system_pods.go:61] "kube-controller-manager-ha-454952" [17a4bba8-4424-4a4c-b5d4-88693cb013b6] Running
	I0404 21:47:53.605273   21531 system_pods.go:61] "kube-controller-manager-ha-454952-m02" [29adfa13-fd82-4b6b-a00a-3ecf9fb575de] Running
	I0404 21:47:53.605279   21531 system_pods.go:61] "kube-controller-manager-ha-454952-m03" [f9ec87de-84d2-4186-a4c3-71fe2e149fd1] Running
	I0404 21:47:53.605285   21531 system_pods.go:61] "kube-proxy-6nkxm" [67f1a256-ee89-4563-ab19-4a75f01d2c3a] Running
	I0404 21:47:53.605290   21531 system_pods.go:61] "kube-proxy-fl4jh" [77c75925-e886-40ca-9db8-0116823489df] Running
	I0404 21:47:53.605295   21531 system_pods.go:61] "kube-proxy-gjvm9" [60759cb6-a394-4e3e-a19e-f9b7c92a19db] Running
	I0404 21:47:53.605300   21531 system_pods.go:61] "kube-scheduler-ha-454952" [9ad2aa31-283d-47d1-a7ff-3c13974f4ba8] Running
	I0404 21:47:53.605309   21531 system_pods.go:61] "kube-scheduler-ha-454952-m02" [89c4a892-3be4-41a7-aa44-b19230ff2515] Running
	I0404 21:47:53.605315   21531 system_pods.go:61] "kube-scheduler-ha-454952-m03" [c0e524d7-282e-4ec1-aee3-1e52867895cc] Running
	I0404 21:47:53.605323   21531 system_pods.go:61] "kube-vip-ha-454952" [87898a7a-a2df-46cf-8b58-f6e5ca4e5f7b] Running
	I0404 21:47:53.605329   21531 system_pods.go:61] "kube-vip-ha-454952-m02" [2632721a-d7cc-40f1-988d-ab0aa8cfe79a] Running
	I0404 21:47:53.605337   21531 system_pods.go:61] "kube-vip-ha-454952-m03" [db7471a2-4620-4872-ab69-2a4722e7980a] Running
	I0404 21:47:53.605343   21531 system_pods.go:61] "storage-provisioner" [c8531ddb-fa9d-4efe-91cc-072e75a5897d] Running
	I0404 21:47:53.605351   21531 system_pods.go:74] duration metric: took 188.55864ms to wait for pod list to return data ...
	I0404 21:47:53.605363   21531 default_sa.go:34] waiting for default service account to be created ...
	I0404 21:47:53.788769   21531 request.go:629] Waited for 183.337016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/default/serviceaccounts
	I0404 21:47:53.788822   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/default/serviceaccounts
	I0404 21:47:53.788828   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.788835   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.788839   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.792760   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:53.792888   21531 default_sa.go:45] found service account: "default"
	I0404 21:47:53.792908   21531 default_sa.go:55] duration metric: took 187.534022ms for default service account to be created ...
	I0404 21:47:53.792922   21531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 21:47:53.989300   21531 request.go:629] Waited for 196.315146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:47:53.989350   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:47:53.989355   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.989362   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.989366   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.997538   21531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0404 21:47:54.004474   21531 system_pods.go:86] 24 kube-system pods found
	I0404 21:47:54.004505   21531 system_pods.go:89] "coredns-76f75df574-9qsz7" [5af3d10e-47b7-439c-80e3-8ee328d87f16] Running
	I0404 21:47:54.004510   21531 system_pods.go:89] "coredns-76f75df574-hsdfw" [e0384e31-ec4b-4b09-b387-5bce7a36b688] Running
	I0404 21:47:54.004515   21531 system_pods.go:89] "etcd-ha-454952" [d3885d9d-9e4a-4ebb-9bbb-85d1fc88519a] Running
	I0404 21:47:54.004519   21531 system_pods.go:89] "etcd-ha-454952-m02" [a84a3d55-0e63-4944-8368-141d61a3dfdd] Running
	I0404 21:47:54.004523   21531 system_pods.go:89] "etcd-ha-454952-m03" [d2982156-d120-43d3-baf6-853acc915bb8] Running
	I0404 21:47:54.004527   21531 system_pods.go:89] "kindnet-7c9dv" [044a24e9-2851-47a3-be58-29ecfae2f0fb] Running
	I0404 21:47:54.004531   21531 system_pods.go:89] "kindnet-7v9fp" [9bf17455-7a45-4fbf-82d2-55bebd46ee2a] Running
	I0404 21:47:54.004536   21531 system_pods.go:89] "kindnet-v8wv6" [44250298-dce4-4e12-88c2-e347b4a63711] Running
	I0404 21:47:54.004540   21531 system_pods.go:89] "kube-apiserver-ha-454952" [93735313-30dc-4c96-b847-a6119cf400c8] Running
	I0404 21:47:54.004545   21531 system_pods.go:89] "kube-apiserver-ha-454952-m02" [b1abcf64-e80a-4e10-b069-c7d6827bda4a] Running
	I0404 21:47:54.004549   21531 system_pods.go:89] "kube-apiserver-ha-454952-m03" [80a7d0c0-874f-47e4-ab91-b40d5d89e741] Running
	I0404 21:47:54.004554   21531 system_pods.go:89] "kube-controller-manager-ha-454952" [17a4bba8-4424-4a4c-b5d4-88693cb013b6] Running
	I0404 21:47:54.004558   21531 system_pods.go:89] "kube-controller-manager-ha-454952-m02" [29adfa13-fd82-4b6b-a00a-3ecf9fb575de] Running
	I0404 21:47:54.004562   21531 system_pods.go:89] "kube-controller-manager-ha-454952-m03" [f9ec87de-84d2-4186-a4c3-71fe2e149fd1] Running
	I0404 21:47:54.004566   21531 system_pods.go:89] "kube-proxy-6nkxm" [67f1a256-ee89-4563-ab19-4a75f01d2c3a] Running
	I0404 21:47:54.004571   21531 system_pods.go:89] "kube-proxy-fl4jh" [77c75925-e886-40ca-9db8-0116823489df] Running
	I0404 21:47:54.004574   21531 system_pods.go:89] "kube-proxy-gjvm9" [60759cb6-a394-4e3e-a19e-f9b7c92a19db] Running
	I0404 21:47:54.004582   21531 system_pods.go:89] "kube-scheduler-ha-454952" [9ad2aa31-283d-47d1-a7ff-3c13974f4ba8] Running
	I0404 21:47:54.004586   21531 system_pods.go:89] "kube-scheduler-ha-454952-m02" [89c4a892-3be4-41a7-aa44-b19230ff2515] Running
	I0404 21:47:54.004590   21531 system_pods.go:89] "kube-scheduler-ha-454952-m03" [c0e524d7-282e-4ec1-aee3-1e52867895cc] Running
	I0404 21:47:54.004594   21531 system_pods.go:89] "kube-vip-ha-454952" [87898a7a-a2df-46cf-8b58-f6e5ca4e5f7b] Running
	I0404 21:47:54.004600   21531 system_pods.go:89] "kube-vip-ha-454952-m02" [2632721a-d7cc-40f1-988d-ab0aa8cfe79a] Running
	I0404 21:47:54.004603   21531 system_pods.go:89] "kube-vip-ha-454952-m03" [db7471a2-4620-4872-ab69-2a4722e7980a] Running
	I0404 21:47:54.004610   21531 system_pods.go:89] "storage-provisioner" [c8531ddb-fa9d-4efe-91cc-072e75a5897d] Running
	I0404 21:47:54.004616   21531 system_pods.go:126] duration metric: took 211.688695ms to wait for k8s-apps to be running ...
	I0404 21:47:54.004625   21531 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 21:47:54.004667   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:47:54.021779   21531 system_svc.go:56] duration metric: took 17.142344ms WaitForService to wait for kubelet
	I0404 21:47:54.021813   21531 kubeadm.go:576] duration metric: took 17.31364983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 21:47:54.021832   21531 node_conditions.go:102] verifying NodePressure condition ...
	I0404 21:47:54.189232   21531 request.go:629] Waited for 167.316748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes
	I0404 21:47:54.189280   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes
	I0404 21:47:54.189285   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:54.189293   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:54.189297   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:54.193610   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:54.194644   21531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 21:47:54.194665   21531 node_conditions.go:123] node cpu capacity is 2
	I0404 21:47:54.194675   21531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 21:47:54.194678   21531 node_conditions.go:123] node cpu capacity is 2
	I0404 21:47:54.194681   21531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 21:47:54.194684   21531 node_conditions.go:123] node cpu capacity is 2
	I0404 21:47:54.194688   21531 node_conditions.go:105] duration metric: took 172.852606ms to run NodePressure ...
	I0404 21:47:54.194699   21531 start.go:240] waiting for startup goroutines ...
	I0404 21:47:54.194717   21531 start.go:254] writing updated cluster config ...
	I0404 21:47:54.195015   21531 ssh_runner.go:195] Run: rm -f paused
	I0404 21:47:54.247265   21531 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 21:47:54.249516   21531 out.go:177] * Done! kubectl is now configured to use "ha-454952" cluster and "default" namespace by default
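	Note: the startup log above ends with minikube polling the API server for kube-system pods (system_pods.go), the default service account (default_sa.go), and node capacity/pressure (node_conditions.go) before declaring the ha-454952 cluster ready. The following is a minimal client-go sketch of roughly the same readiness checks, shown here only for illustration; it is not part of the minikube test code, and the kubeconfig path and hard-coded "kube-system"/"default" namespaces are assumptions.

	// readiness_sketch.go: a rough, hypothetical equivalent of the checks logged above.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumed kubeconfig location; the real harness resolves this per-profile.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx := context.TODO()

		// 1. Every kube-system pod should be Running (cf. the system_pods.go lines above).
		pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			if pod.Status.Phase != corev1.PodRunning {
				fmt.Printf("pod %s not running yet (%s)\n", pod.Name, pod.Status.Phase)
			}
		}

		// 2. The default service account must exist (cf. the default_sa.go lines above).
		if _, err := clientset.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err != nil {
			panic(err)
		}

		// 3. Node capacity, matching the NodePressure log lines (cpu / ephemeral-storage).
		nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, node := range nodes.Items {
			cpu := node.Status.Capacity[corev1.ResourceCPU]
			storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), storage.String())
		}
	}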
	
	
	==> CRI-O <==
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.116363603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267485116332037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5659f3d-5b56-4258-92e7-b7b589d78d05 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.116937016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aeeb6b94-6591-4992-9dfb-ec8befdc351c name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.117015463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aeeb6b94-6591-4992-9dfb-ec8befdc351c name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.117260041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267279351337073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f910060a98869ee2c016b8406af4f22a303c1718c8717c07899f29f596b29cb,PodSandboxId:e1823b9750831163e441ebb50f4fbaef84bd4361b664a285dcb8378c4a583aa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267102036847137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101615113809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101585172807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-e
c4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3b245ea3482de34a741540e05e226d95a56b87a5397b1db4fc9cc669a70a69,PodSandboxId:90fe92fd101c4d66d5b7a4223cd6ea20254dc790eeedd23b086c451b00534975,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267
099629234265,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267099360228042,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c8fa7da2804867788af45dab4db03cd14b20ee017b4626f4e256792a8e568d,PodSandboxId:204ef6b79c8cb0db393523a449d5288bbbbe42f7f7943343747a321776d2dc5a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267082660799590,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8ce5491d2921db8d31119ee2f820a1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65,PodSandboxId:2d41ace5ee35fc68b4b09e8c976e0fcb5e621f13990528f6b247bda12a71bc1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267079720598876,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267079628176165,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1,PodSandboxId:a29d53a59569a4a39c5d5ae9aaf81020cc4c9efa4331c6bcbc49ff108002bf66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267079613888504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267079596265674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aeeb6b94-6591-4992-9dfb-ec8befdc351c name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.162524370Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa2dc904-3a95-4f11-af04-82443ee206f0 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.162629172Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa2dc904-3a95-4f11-af04-82443ee206f0 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.164492623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e019f942-9419-419e-89a6-1839ee4617be name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.165010587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267485164984393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e019f942-9419-419e-89a6-1839ee4617be name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.165765420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de95782b-bd6d-4758-ae7b-8141e525563a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.165825287Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de95782b-bd6d-4758-ae7b-8141e525563a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.166059212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267279351337073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f910060a98869ee2c016b8406af4f22a303c1718c8717c07899f29f596b29cb,PodSandboxId:e1823b9750831163e441ebb50f4fbaef84bd4361b664a285dcb8378c4a583aa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267102036847137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101615113809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101585172807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-e
c4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3b245ea3482de34a741540e05e226d95a56b87a5397b1db4fc9cc669a70a69,PodSandboxId:90fe92fd101c4d66d5b7a4223cd6ea20254dc790eeedd23b086c451b00534975,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267
099629234265,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267099360228042,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c8fa7da2804867788af45dab4db03cd14b20ee017b4626f4e256792a8e568d,PodSandboxId:204ef6b79c8cb0db393523a449d5288bbbbe42f7f7943343747a321776d2dc5a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267082660799590,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8ce5491d2921db8d31119ee2f820a1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65,PodSandboxId:2d41ace5ee35fc68b4b09e8c976e0fcb5e621f13990528f6b247bda12a71bc1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267079720598876,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267079628176165,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1,PodSandboxId:a29d53a59569a4a39c5d5ae9aaf81020cc4c9efa4331c6bcbc49ff108002bf66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267079613888504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267079596265674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de95782b-bd6d-4758-ae7b-8141e525563a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.219753381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=612b1b83-4570-457d-ad80-6fb76d315bbe name=/runtime.v1.RuntimeService/Version
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.219828947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=612b1b83-4570-457d-ad80-6fb76d315bbe name=/runtime.v1.RuntimeService/Version
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.222959405Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfe07855-8d89-49b3-b6b5-0c1893bdeb14 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.223381522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267485223354420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfe07855-8d89-49b3-b6b5-0c1893bdeb14 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.224070924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b2c9c6d-cd9d-4011-a1b6-ce1874ec39b5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.224145200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b2c9c6d-cd9d-4011-a1b6-ce1874ec39b5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.224415364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267279351337073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f910060a98869ee2c016b8406af4f22a303c1718c8717c07899f29f596b29cb,PodSandboxId:e1823b9750831163e441ebb50f4fbaef84bd4361b664a285dcb8378c4a583aa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267102036847137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101615113809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101585172807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-e
c4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3b245ea3482de34a741540e05e226d95a56b87a5397b1db4fc9cc669a70a69,PodSandboxId:90fe92fd101c4d66d5b7a4223cd6ea20254dc790eeedd23b086c451b00534975,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267
099629234265,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267099360228042,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c8fa7da2804867788af45dab4db03cd14b20ee017b4626f4e256792a8e568d,PodSandboxId:204ef6b79c8cb0db393523a449d5288bbbbe42f7f7943343747a321776d2dc5a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267082660799590,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8ce5491d2921db8d31119ee2f820a1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65,PodSandboxId:2d41ace5ee35fc68b4b09e8c976e0fcb5e621f13990528f6b247bda12a71bc1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267079720598876,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267079628176165,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1,PodSandboxId:a29d53a59569a4a39c5d5ae9aaf81020cc4c9efa4331c6bcbc49ff108002bf66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267079613888504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267079596265674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b2c9c6d-cd9d-4011-a1b6-ce1874ec39b5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.264265029Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4addc9e3-cfbd-4679-86fc-8cac67071ad9 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.264336707Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4addc9e3-cfbd-4679-86fc-8cac67071ad9 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.273215619Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd24f4b9-f122-4581-bdef-88a482514273 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.273941547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267485273914955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd24f4b9-f122-4581-bdef-88a482514273 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.274466681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bab2ddb7-1cd0-4c47-b256-59454d65b7c0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.274526035Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bab2ddb7-1cd0-4c47-b256-59454d65b7c0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:51:25 ha-454952 crio[685]: time="2024-04-04 21:51:25.274907608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267279351337073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f910060a98869ee2c016b8406af4f22a303c1718c8717c07899f29f596b29cb,PodSandboxId:e1823b9750831163e441ebb50f4fbaef84bd4361b664a285dcb8378c4a583aa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267102036847137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101615113809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101585172807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-e
c4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3b245ea3482de34a741540e05e226d95a56b87a5397b1db4fc9cc669a70a69,PodSandboxId:90fe92fd101c4d66d5b7a4223cd6ea20254dc790eeedd23b086c451b00534975,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267
099629234265,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267099360228042,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c8fa7da2804867788af45dab4db03cd14b20ee017b4626f4e256792a8e568d,PodSandboxId:204ef6b79c8cb0db393523a449d5288bbbbe42f7f7943343747a321776d2dc5a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267082660799590,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8ce5491d2921db8d31119ee2f820a1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65,PodSandboxId:2d41ace5ee35fc68b4b09e8c976e0fcb5e621f13990528f6b247bda12a71bc1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267079720598876,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267079628176165,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1,PodSandboxId:a29d53a59569a4a39c5d5ae9aaf81020cc4c9efa4331c6bcbc49ff108002bf66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267079613888504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267079596265674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bab2ddb7-1cd0-4c47-b256-59454d65b7c0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85478f2f51e89       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2c8e166c4509c       busybox-7fdf7869d9-q56fw
	8f910060a9886       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   e1823b9750831       storage-provisioner
	2f6afcac0a6b1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   b1934889b30c3       coredns-76f75df574-9qsz7
	b3fc8d8ef023d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0b786dbf91033       coredns-76f75df574-hsdfw
	2a3b245ea3482       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   90fe92fd101c4       kindnet-v8wv6
	90c39a2687464       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      6 minutes ago       Running             kube-proxy                0                   2748de75b7d2d       kube-proxy-gjvm9
	a0c8fa7da2804       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   204ef6b79c8cb       kube-vip-ha-454952
	c3820dd809544       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      6 minutes ago       Running             kube-controller-manager   0                   2d41ace5ee35f       kube-controller-manager-ha-454952
	e9faec0816d4c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      6 minutes ago       Running             kube-scheduler            0                   9f1d5c3d0af96       kube-scheduler-ha-454952
	a94e56804eb2e       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      6 minutes ago       Running             kube-apiserver            0                   a29d53a59569a       kube-apiserver-ha-454952
	72549bccc4ca2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   92d02e4d213b3       etcd-ha-454952
	
	
	==> coredns [2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f] <==
	[INFO] 10.244.1.2:55731 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000112498s
	[INFO] 10.244.1.2:51841 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001879121s
	[INFO] 10.244.2.2:33882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001666s
	[INFO] 10.244.2.2:59301 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003616562s
	[INFO] 10.244.2.2:38692 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000240884s
	[INFO] 10.244.2.2:49348 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146448s
	[INFO] 10.244.2.2:48867 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138618s
	[INFO] 10.244.0.4:54618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070304s
	[INFO] 10.244.1.2:58936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144716s
	[INFO] 10.244.1.2:43170 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002050369s
	[INFO] 10.244.1.2:59811 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149418s
	[INFO] 10.244.1.2:58173 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001389488s
	[INFO] 10.244.1.2:50742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078385s
	[INFO] 10.244.1.2:46973 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077499s
	[INFO] 10.244.2.2:43785 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153069s
	[INFO] 10.244.2.2:37406 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074939s
	[INFO] 10.244.0.4:41091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141133s
	[INFO] 10.244.0.4:44476 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202801s
	[INFO] 10.244.0.4:45234 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104556s
	[INFO] 10.244.1.2:39647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182075s
	[INFO] 10.244.1.2:50588 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151414s
	[INFO] 10.244.1.2:41606 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195991s
	[INFO] 10.244.2.2:53483 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000232191s
	[INFO] 10.244.2.2:60437 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000132599s
	[INFO] 10.244.1.2:51965 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166052s
	
	
	==> coredns [b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c] <==
	[INFO] 10.244.2.2:52520 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000300772s
	[INFO] 10.244.2.2:56049 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000228681s
	[INFO] 10.244.2.2:38128 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003078889s
	[INFO] 10.244.0.4:60519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135291s
	[INFO] 10.244.0.4:43464 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002071208s
	[INFO] 10.244.0.4:51293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085331s
	[INFO] 10.244.0.4:55321 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087493s
	[INFO] 10.244.0.4:59685 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001579648s
	[INFO] 10.244.0.4:33041 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157393s
	[INFO] 10.244.0.4:58677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109886s
	[INFO] 10.244.1.2:59156 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010739s
	[INFO] 10.244.1.2:53747 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144738s
	[INFO] 10.244.2.2:48166 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144032s
	[INFO] 10.244.2.2:36301 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000211342s
	[INFO] 10.244.0.4:34383 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072486s
	[INFO] 10.244.1.2:47623 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000275299s
	[INFO] 10.244.2.2:36199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000346157s
	[INFO] 10.244.2.2:51401 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193332s
	[INFO] 10.244.0.4:48691 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082711s
	[INFO] 10.244.0.4:37702 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000047018s
	[INFO] 10.244.0.4:59456 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000854s
	[INFO] 10.244.0.4:56014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070317s
	[INFO] 10.244.1.2:47145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204326s
	[INFO] 10.244.1.2:36898 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127022s
	[INFO] 10.244.1.2:42608 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109931s
	
	
	==> describe nodes <==
	Name:               ha-454952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T21_44_47_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:44:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:51:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:48:20 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:48:20 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:48:20 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:48:20 +0000   Thu, 04 Apr 2024 21:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    ha-454952
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bcaf06686d84ca785ca1e79fc3ee92b
	  System UUID:                9bcaf066-86d8-4ca7-85ca-1e79fc3ee92b
	  Boot ID:                    00b02ff9-8c43-4004-ab1c-4fcde5b8a674
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-q56fw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 coredns-76f75df574-9qsz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 coredns-76f75df574-hsdfw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 etcd-ha-454952                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m40s
	  kube-system                 kindnet-v8wv6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m27s
	  kube-system                 kube-apiserver-ha-454952             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-controller-manager-ha-454952    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-proxy-gjvm9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-scheduler-ha-454952             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-vip-ha-454952                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m25s                  kube-proxy       
	  Normal  Starting                 6m47s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     6m46s (x7 over 6m47s)  kubelet          Node ha-454952 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m46s (x8 over 6m47s)  kubelet          Node ha-454952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m46s (x8 over 6m47s)  kubelet          Node ha-454952 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m39s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m39s                  kubelet          Node ha-454952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s                  kubelet          Node ha-454952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s                  kubelet          Node ha-454952 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m28s                  node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal  NodeReady                6m24s                  kubelet          Node ha-454952 status is now: NodeReady
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal  RegisteredNode           3m36s                  node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	
	
	Name:               ha-454952-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T21_46_24_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:46:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:49:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 04 Apr 2024 21:48:22 +0000   Thu, 04 Apr 2024 21:49:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 04 Apr 2024 21:48:22 +0000   Thu, 04 Apr 2024 21:49:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 04 Apr 2024 21:48:22 +0000   Thu, 04 Apr 2024 21:49:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 04 Apr 2024 21:48:22 +0000   Thu, 04 Apr 2024 21:49:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-454952-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f458ea60975d458aa9cb6e203993b49a
	  System UUID:                f458ea60-975d-458a-a9cb-6e203993b49a
	  Boot ID:                    45704b3c-2202-4d10-9e3c-5b89634b1116
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rshl2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 etcd-ha-454952-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m3s
	  kube-system                 kindnet-7c9dv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m5s
	  kube-system                 kube-apiserver-ha-454952-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-ha-454952-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-6nkxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-ha-454952-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-vip-ha-454952-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node ha-454952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node ha-454952-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node ha-454952-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           4m48s                node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           3m36s                node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  NodeNotReady             101s                 node-controller  Node ha-454952-m02 status is now: NodeNotReady
	
	
	Name:               ha-454952-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T21_47_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:47:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:51:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:48:01 +0000   Thu, 04 Apr 2024 21:47:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:48:01 +0000   Thu, 04 Apr 2024 21:47:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:48:01 +0000   Thu, 04 Apr 2024 21:47:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:48:01 +0000   Thu, 04 Apr 2024 21:47:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-454952-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b7367a50ec545c4ae6fb446cfb73753
	  System UUID:                4b7367a5-0ec5-45c4-ae6f-b446cfb73753
	  Boot ID:                    2b997353-af0c-4d49-8d13-945875ed8eb6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-8qf48                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 etcd-ha-454952-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m53s
	  kube-system                 kindnet-7v9fp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m55s
	  kube-system                 kube-apiserver-ha-454952-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-controller-manager-ha-454952-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-fl4jh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-scheduler-ha-454952-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-vip-ha-454952-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m55s (x8 over 3m55s)  kubelet          Node ha-454952-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s (x8 over 3m55s)  kubelet          Node ha-454952-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s (x7 over 3m55s)  kubelet          Node ha-454952-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	  Normal  RegisteredNode           3m36s                  node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	
	
	Name:               ha-454952-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T21_48_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:48:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:51:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:49:03 +0000   Thu, 04 Apr 2024 21:48:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:49:03 +0000   Thu, 04 Apr 2024 21:48:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:49:03 +0000   Thu, 04 Apr 2024 21:48:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:49:03 +0000   Thu, 04 Apr 2024 21:48:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-454952-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eaf323303c74873975b4953c592319b
	  System UUID:                0eaf3233-03c7-4873-975b-4953c592319b
	  Boot ID:                    4fc91205-3a73-4a27-9638-4008c1292325
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6mmgj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-proxy-5j62j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m52s)  kubelet          Node ha-454952-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m52s)  kubelet          Node ha-454952-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x2 over 2m52s)  kubelet          Node ha-454952-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-454952-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr 4 21:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053353] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041480] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.565978] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.745346] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.640914] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.710951] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.059484] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060191] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.177107] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.151661] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.307912] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.603000] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.064613] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.478091] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +0.520027] kauditd_printk_skb: 48 callbacks suppressed
	[  +7.408849] systemd-fstab-generator[1387]: Ignoring "noauto" option for root device
	[  +0.092051] kauditd_printk_skb: 49 callbacks suppressed
	[ +12.761594] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 4 21:46] kauditd_printk_skb: 76 callbacks suppressed
	
	
	==> etcd [72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3] <==
	{"level":"warn","ts":"2024-04-04T21:51:25.379018Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.389983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.460304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.478149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.596412Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.60753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.646056Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.655931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.661913Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.678987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.679186Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.691011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.707657Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.72014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.727752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.730917Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.740143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.747742Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.755236Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.759268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.762234Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.767592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.774763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.778202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:51:25.78323Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:51:25 up 7 min,  0 users,  load average: 0.38, 0.55, 0.29
	Linux ha-454952 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2a3b245ea3482de34a741540e05e226d95a56b87a5397b1db4fc9cc669a70a69] <==
	I0404 21:50:51.333370       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 21:51:01.340419       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 21:51:01.340472       1 main.go:227] handling current node
	I0404 21:51:01.340485       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 21:51:01.340491       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 21:51:01.340606       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0404 21:51:01.340637       1 main.go:250] Node ha-454952-m03 has CIDR [10.244.2.0/24] 
	I0404 21:51:01.340874       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 21:51:01.340907       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 21:51:11.360331       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 21:51:11.360637       1 main.go:227] handling current node
	I0404 21:51:11.360655       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 21:51:11.360662       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 21:51:11.360864       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0404 21:51:11.360907       1 main.go:250] Node ha-454952-m03 has CIDR [10.244.2.0/24] 
	I0404 21:51:11.360988       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 21:51:11.361025       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 21:51:21.376637       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 21:51:21.376733       1 main.go:227] handling current node
	I0404 21:51:21.376759       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 21:51:21.376765       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 21:51:21.376922       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0404 21:51:21.376958       1 main.go:250] Node ha-454952-m03 has CIDR [10.244.2.0/24] 
	I0404 21:51:21.377007       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 21:51:21.377032       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1] <==
	I0404 21:44:42.975292       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0404 21:44:42.975316       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0404 21:44:42.992175       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0404 21:44:42.992224       1 aggregator.go:165] initial CRD sync complete...
	I0404 21:44:42.992231       1 autoregister_controller.go:141] Starting autoregister controller
	I0404 21:44:42.992236       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0404 21:44:42.992243       1 cache.go:39] Caches are synced for autoregister controller
	I0404 21:44:43.010463       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0404 21:44:43.020528       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0404 21:44:43.882866       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0404 21:44:43.891393       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0404 21:44:43.891436       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0404 21:44:44.767100       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0404 21:44:44.818568       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0404 21:44:44.898133       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0404 21:44:44.905347       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0404 21:44:44.919860       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.13]
	I0404 21:44:44.921482       1 controller.go:624] quota admission added evaluator for: endpoints
	I0404 21:44:44.926803       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0404 21:44:46.513153       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0404 21:44:46.539446       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0404 21:44:46.550519       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0404 21:44:58.606495       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0404 21:44:58.963925       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	W0404 21:49:14.925858       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.13 192.168.39.217]
	
	
	==> kube-controller-manager [c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65] <==
	I0404 21:47:56.660364       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="43.803µs"
	I0404 21:47:58.588132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.452482ms"
	I0404 21:47:58.588860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="189.895µs"
	I0404 21:47:58.814779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="11.497163ms"
	I0404 21:47:58.814867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="39.61µs"
	I0404 21:47:59.634010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.507634ms"
	I0404 21:47:59.634872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="59.366µs"
	E0404 21:48:32.982249       1 certificate_controller.go:146] Sync csr-7qwg4 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-7qwg4": the object has been modified; please apply your changes to the latest version and try again
	E0404 21:48:32.985440       1 certificate_controller.go:146] Sync csr-7qwg4 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-7qwg4": the object has been modified; please apply your changes to the latest version and try again
	I0404 21:48:33.275529       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-454952-m04\" does not exist"
	I0404 21:48:33.357859       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rxzk6"
	I0404 21:48:33.359611       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vkhx6"
	I0404 21:48:33.370013       1 range_allocator.go:380] "Set node PodCIDR" node="ha-454952-m04" podCIDRs=["10.244.3.0/24"]
	I0404 21:48:33.503662       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-vkhx6"
	I0404 21:48:33.535295       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-97shf"
	I0404 21:48:33.585806       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-ctxj5"
	E0404 21:48:33.614165       1 daemon_controller.go:326] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"2e973454-81e4-41a4-9525-61d5c5586ff2", ResourceVersion:"988", Generation:1, CreationTimestamp:time.Date(2024, time.April, 4, 21, 44, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00100b200), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1
, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVol
umeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0020948c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0017dc318), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVo
lumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:
v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0017dc330), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPers
istentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"registry.k8s.io/kube-proxy:v1.29.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00100b240)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"ku
be-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001a37aa0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c16b28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", No
deSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00050ed20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil
), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001d4b3e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001c16b80)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0404 21:48:33.642548       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-rxzk6"
	I0404 21:48:38.010841       1 event.go:376] "Event occurred" object="ha-454952-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller"
	I0404 21:48:38.027056       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-454952-m04"
	I0404 21:48:43.311557       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-454952-m04"
	I0404 21:49:44.309356       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-454952-m04"
	I0404 21:49:44.360889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="29.759619ms"
	I0404 21:49:44.360993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="49.841µs"
	
	
	==> kube-proxy [90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05] <==
	I0404 21:44:59.909318       1 server_others.go:72] "Using iptables proxy"
	I0404 21:44:59.936579       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.13"]
	I0404 21:44:59.996411       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 21:44:59.996464       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 21:44:59.996479       1 server_others.go:168] "Using iptables Proxier"
	I0404 21:45:00.004335       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 21:45:00.004600       1 server.go:865] "Version info" version="v1.29.3"
	I0404 21:45:00.004636       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 21:45:00.009109       1 config.go:315] "Starting node config controller"
	I0404 21:45:00.009536       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 21:45:00.019011       1 config.go:188] "Starting service config controller"
	I0404 21:45:00.019046       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 21:45:00.019065       1 config.go:97] "Starting endpoint slice config controller"
	I0404 21:45:00.019069       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 21:45:00.110411       1 shared_informer.go:318] Caches are synced for node config
	I0404 21:45:00.121939       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0404 21:45:00.122095       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048] <==
	E0404 21:44:44.511804       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0404 21:44:44.515508       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0404 21:44:44.515572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0404 21:44:45.956811       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 21:47:55.195896       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="96ea0da6-790d-4093-8ac2-25d90308000e" pod="default/busybox-7fdf7869d9-8qf48" assumedNode="ha-454952-m03" currentNode="ha-454952-m02"
	E0404 21:47:55.220542       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-8qf48\": pod busybox-7fdf7869d9-8qf48 is already assigned to node \"ha-454952-m03\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-8qf48" node="ha-454952-m02"
	E0404 21:47:55.220750       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 96ea0da6-790d-4093-8ac2-25d90308000e(default/busybox-7fdf7869d9-8qf48) was assumed on ha-454952-m02 but assigned to ha-454952-m03"
	E0404 21:47:55.220835       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-8qf48\": pod busybox-7fdf7869d9-8qf48 is already assigned to node \"ha-454952-m03\"" pod="default/busybox-7fdf7869d9-8qf48"
	I0404 21:47:55.220980       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-8qf48" node="ha-454952-m03"
	E0404 21:47:55.274958       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-pcb8c\": pod busybox-7fdf7869d9-pcb8c is already assigned to node \"ha-454952\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-pcb8c" node="ha-454952"
	E0404 21:47:55.275055       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 27044b2b-8296-4cca-811e-4a0584edabbf(default/busybox-7fdf7869d9-pcb8c) wasn't assumed so cannot be forgotten"
	E0404 21:47:55.275105       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-pcb8c\": pod busybox-7fdf7869d9-pcb8c is already assigned to node \"ha-454952\"" pod="default/busybox-7fdf7869d9-pcb8c"
	I0404 21:47:55.275170       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-pcb8c" node="ha-454952"
	E0404 21:48:33.418226       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vkhx6\": pod kindnet-vkhx6 is already assigned to node \"ha-454952-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vkhx6" node="ha-454952-m04"
	E0404 21:48:33.418326       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 5814bfb4-ad69-4d7b-b7e9-5870b1db6184(kube-system/kindnet-vkhx6) wasn't assumed so cannot be forgotten"
	E0404 21:48:33.418377       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vkhx6\": pod kindnet-vkhx6 is already assigned to node \"ha-454952-m04\"" pod="kube-system/kindnet-vkhx6"
	I0404 21:48:33.418403       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vkhx6" node="ha-454952-m04"
	E0404 21:48:33.418771       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rxzk6\": pod kube-proxy-rxzk6 is already assigned to node \"ha-454952-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rxzk6" node="ha-454952-m04"
	E0404 21:48:33.418937       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod fe3ae4e5-f3df-4635-8cb3-056592eac2a2(kube-system/kube-proxy-rxzk6) wasn't assumed so cannot be forgotten"
	E0404 21:48:33.418994       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rxzk6\": pod kube-proxy-rxzk6 is already assigned to node \"ha-454952-m04\"" pod="kube-system/kube-proxy-rxzk6"
	I0404 21:48:33.419027       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rxzk6" node="ha-454952-m04"
	E0404 21:48:33.463113       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-97shf\": pod kube-proxy-97shf is already assigned to node \"ha-454952-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-97shf" node="ha-454952-m04"
	E0404 21:48:33.463268       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 1765307b-c5ff-43e2-909d-b541f9cd6f85(kube-system/kube-proxy-97shf) wasn't assumed so cannot be forgotten"
	E0404 21:48:33.463492       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-97shf\": pod kube-proxy-97shf is already assigned to node \"ha-454952-m04\"" pod="kube-system/kube-proxy-97shf"
	I0404 21:48:33.466246       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-97shf" node="ha-454952-m04"
	
	
	==> kubelet <==
	Apr 04 21:47:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 21:47:55 ha-454952 kubelet[1393]: I0404 21:47:55.259940    1393 topology_manager.go:215] "Topology Admit Handler" podUID="27044b2b-8296-4cca-811e-4a0584edabbf" podNamespace="default" podName="busybox-7fdf7869d9-pcb8c"
	Apr 04 21:47:55 ha-454952 kubelet[1393]: E0404 21:47:55.372087    1393 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-x6nj4], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-7fdf7869d9-pcb8c" podUID="27044b2b-8296-4cca-811e-4a0584edabbf"
	Apr 04 21:47:55 ha-454952 kubelet[1393]: I0404 21:47:55.420394    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6nj4\" (UniqueName: \"kubernetes.io/projected/27044b2b-8296-4cca-811e-4a0584edabbf-kube-api-access-x6nj4\") pod \"busybox-7fdf7869d9-pcb8c\" (UID: \"27044b2b-8296-4cca-811e-4a0584edabbf\") " pod="default/busybox-7fdf7869d9-pcb8c"
	Apr 04 21:47:55 ha-454952 kubelet[1393]: I0404 21:47:55.621437    1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6nj4\" (UniqueName: \"kubernetes.io/projected/27044b2b-8296-4cca-811e-4a0584edabbf-kube-api-access-x6nj4\") pod \"27044b2b-8296-4cca-811e-4a0584edabbf\" (UID: \"27044b2b-8296-4cca-811e-4a0584edabbf\") "
	Apr 04 21:47:55 ha-454952 kubelet[1393]: I0404 21:47:55.631457    1393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27044b2b-8296-4cca-811e-4a0584edabbf-kube-api-access-x6nj4" (OuterVolumeSpecName: "kube-api-access-x6nj4") pod "27044b2b-8296-4cca-811e-4a0584edabbf" (UID: "27044b2b-8296-4cca-811e-4a0584edabbf"). InnerVolumeSpecName "kube-api-access-x6nj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 04 21:47:55 ha-454952 kubelet[1393]: I0404 21:47:55.722472    1393 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x6nj4\" (UniqueName: \"kubernetes.io/projected/27044b2b-8296-4cca-811e-4a0584edabbf-kube-api-access-x6nj4\") on node \"ha-454952\" DevicePath \"\""
	Apr 04 21:47:56 ha-454952 kubelet[1393]: I0404 21:47:56.622638    1393 topology_manager.go:215] "Topology Admit Handler" podUID="53780518-8100-4f1a-993c-fb9c76dfecb1" podNamespace="default" podName="busybox-7fdf7869d9-q56fw"
	Apr 04 21:47:56 ha-454952 kubelet[1393]: I0404 21:47:56.628154    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmndm\" (UniqueName: \"kubernetes.io/projected/53780518-8100-4f1a-993c-fb9c76dfecb1-kube-api-access-tmndm\") pod \"busybox-7fdf7869d9-q56fw\" (UID: \"53780518-8100-4f1a-993c-fb9c76dfecb1\") " pod="default/busybox-7fdf7869d9-q56fw"
	Apr 04 21:47:56 ha-454952 kubelet[1393]: I0404 21:47:56.708200    1393 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27044b2b-8296-4cca-811e-4a0584edabbf" path="/var/lib/kubelet/pods/27044b2b-8296-4cca-811e-4a0584edabbf/volumes"
	Apr 04 21:48:46 ha-454952 kubelet[1393]: E0404 21:48:46.749195    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:48:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:48:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:48:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:48:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 21:49:46 ha-454952 kubelet[1393]: E0404 21:49:46.750885    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:49:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:49:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:49:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:49:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 21:50:46 ha-454952 kubelet[1393]: E0404 21:50:46.750415    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:50:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:50:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:50:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:50:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-454952 -n ha-454952
helpers_test.go:261: (dbg) Run:  kubectl --context ha-454952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.34s)

x
+
TestMultiControlPlane/serial/RestartSecondaryNode (47.38s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr: exit status 3 (3.19354336s)

-- stdout --
	ha-454952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-454952-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0404 21:51:30.501260   25986 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:51:30.501383   25986 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:30.501394   25986 out.go:304] Setting ErrFile to fd 2...
	I0404 21:51:30.501399   25986 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:30.501577   25986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:51:30.501771   25986 out.go:298] Setting JSON to false
	I0404 21:51:30.501799   25986 mustload.go:65] Loading cluster: ha-454952
	I0404 21:51:30.501857   25986 notify.go:220] Checking for updates...
	I0404 21:51:30.502162   25986 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:51:30.502176   25986 status.go:255] checking status of ha-454952 ...
	I0404 21:51:30.502517   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:30.502581   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:30.518732   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I0404 21:51:30.519291   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:30.519879   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:30.519903   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:30.520286   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:30.520502   25986 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:51:30.522136   25986 status.go:330] ha-454952 host status = "Running" (err=<nil>)
	I0404 21:51:30.522155   25986 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:30.522443   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:30.522477   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:30.537777   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I0404 21:51:30.538292   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:30.538854   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:30.538883   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:30.539249   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:30.539430   25986 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:51:30.542525   25986 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:30.542947   25986 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:30.542975   25986 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:30.543087   25986 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:30.543362   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:30.543408   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:30.558022   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I0404 21:51:30.558449   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:30.558870   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:30.558890   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:30.559253   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:30.559432   25986 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:51:30.559618   25986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:30.559644   25986 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:51:30.562109   25986 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:30.562567   25986 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:30.562589   25986 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:30.562735   25986 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:51:30.562910   25986 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:51:30.563089   25986 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:51:30.563219   25986 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:51:30.649164   25986 ssh_runner.go:195] Run: systemctl --version
	I0404 21:51:30.655938   25986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:30.671299   25986 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:51:30.671336   25986 api_server.go:166] Checking apiserver status ...
	I0404 21:51:30.671378   25986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:51:30.685147   25986 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	W0404 21:51:30.695096   25986 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:51:30.695158   25986 ssh_runner.go:195] Run: ls
	I0404 21:51:30.700546   25986 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:51:30.705967   25986 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:51:30.705988   25986 status.go:422] ha-454952 apiserver status = Running (err=<nil>)
	I0404 21:51:30.705998   25986 status.go:257] ha-454952 status: &{Name:ha-454952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:51:30.706019   25986 status.go:255] checking status of ha-454952-m02 ...
	I0404 21:51:30.706310   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:30.706341   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:30.721935   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46193
	I0404 21:51:30.722423   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:30.722965   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:30.722993   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:30.723309   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:30.723501   25986 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 21:51:30.725085   25986 status.go:330] ha-454952-m02 host status = "Running" (err=<nil>)
	I0404 21:51:30.725101   25986 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:30.725375   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:30.725406   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:30.739852   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0404 21:51:30.740354   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:30.740861   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:30.740895   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:30.741245   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:30.741438   25986 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:51:30.744317   25986 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:30.744779   25986 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:30.744803   25986 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:30.745055   25986 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:30.745346   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:30.745382   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:30.760694   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34955
	I0404 21:51:30.761169   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:30.761675   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:30.761701   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:30.762035   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:30.762247   25986 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:51:30.762459   25986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:30.762482   25986 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:51:30.764899   25986 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:30.765510   25986 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:30.765545   25986 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:30.765741   25986 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:51:30.765978   25986 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:51:30.766150   25986 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:51:30.766280   25986 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	W0404 21:51:33.284521   25986 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.60:22: connect: no route to host
	W0404 21:51:33.284727   25986 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	E0404 21:51:33.284757   25986 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:33.284771   25986 status.go:257] ha-454952-m02 status: &{Name:ha-454952-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0404 21:51:33.284798   25986 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:33.284812   25986 status.go:255] checking status of ha-454952-m03 ...
	I0404 21:51:33.285152   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:33.285222   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:33.299881   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32965
	I0404 21:51:33.300346   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:33.300822   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:33.300843   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:33.301197   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:33.301412   25986 main.go:141] libmachine: (ha-454952-m03) Calling .GetState
	I0404 21:51:33.303016   25986 status.go:330] ha-454952-m03 host status = "Running" (err=<nil>)
	I0404 21:51:33.303033   25986 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:51:33.303409   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:33.303446   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:33.317838   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0404 21:51:33.318223   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:33.318660   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:33.318675   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:33.318945   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:33.319126   25986 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:51:33.322239   25986 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:33.322711   25986 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:51:33.322752   25986 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:33.322949   25986 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:51:33.323285   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:33.323331   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:33.338008   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38943
	I0404 21:51:33.338474   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:33.338932   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:33.338951   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:33.339250   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:33.339467   25986 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:51:33.339662   25986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:33.339689   25986 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:51:33.342776   25986 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:33.343334   25986 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:51:33.343375   25986 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:33.343541   25986 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:51:33.343764   25986 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:51:33.343977   25986 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:51:33.344139   25986 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:51:33.424910   25986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:33.442824   25986 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:51:33.442851   25986 api_server.go:166] Checking apiserver status ...
	I0404 21:51:33.442888   25986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:51:33.457994   25986 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0404 21:51:33.470771   25986 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:51:33.470858   25986 ssh_runner.go:195] Run: ls
	I0404 21:51:33.476855   25986 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:51:33.482580   25986 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:51:33.482612   25986 status.go:422] ha-454952-m03 apiserver status = Running (err=<nil>)
	I0404 21:51:33.482624   25986 status.go:257] ha-454952-m03 status: &{Name:ha-454952-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:51:33.482643   25986 status.go:255] checking status of ha-454952-m04 ...
	I0404 21:51:33.483053   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:33.483101   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:33.498352   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44853
	I0404 21:51:33.498823   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:33.499340   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:33.499360   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:33.499678   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:33.499890   25986 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 21:51:33.501597   25986 status.go:330] ha-454952-m04 host status = "Running" (err=<nil>)
	I0404 21:51:33.501612   25986 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:51:33.501893   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:33.501942   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:33.516809   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36701
	I0404 21:51:33.517285   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:33.517847   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:33.517869   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:33.518198   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:33.518376   25986 main.go:141] libmachine: (ha-454952-m04) Calling .GetIP
	I0404 21:51:33.521142   25986 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:33.521622   25986 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:51:33.521663   25986 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:33.521797   25986 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:51:33.522071   25986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:33.522104   25986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:33.538418   25986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I0404 21:51:33.538850   25986 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:33.539339   25986 main.go:141] libmachine: Using API Version  1
	I0404 21:51:33.539373   25986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:33.539797   25986 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:33.540088   25986 main.go:141] libmachine: (ha-454952-m04) Calling .DriverName
	I0404 21:51:33.540297   25986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:33.540323   25986 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHHostname
	I0404 21:51:33.544010   25986 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:33.544628   25986 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:51:33.544655   25986 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:33.544872   25986 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHPort
	I0404 21:51:33.545085   25986 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHKeyPath
	I0404 21:51:33.545257   25986 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHUsername
	I0404 21:51:33.545398   25986 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m04/id_rsa Username:docker}
	I0404 21:51:33.624758   25986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:33.639475   25986 status.go:257] ha-454952-m04 status: &{Name:ha-454952-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr: exit status 3 (4.774356551s)

-- stdout --
	ha-454952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-454952-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0404 21:51:35.071971   26069 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:51:35.072249   26069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:35.072262   26069 out.go:304] Setting ErrFile to fd 2...
	I0404 21:51:35.072269   26069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:35.072563   26069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:51:35.072829   26069 out.go:298] Setting JSON to false
	I0404 21:51:35.072873   26069 mustload.go:65] Loading cluster: ha-454952
	I0404 21:51:35.072931   26069 notify.go:220] Checking for updates...
	I0404 21:51:35.073357   26069 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:51:35.073372   26069 status.go:255] checking status of ha-454952 ...
	I0404 21:51:35.073873   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:35.073927   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:35.092858   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0404 21:51:35.093237   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:35.093802   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:35.093831   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:35.094215   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:35.094455   26069 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:51:35.096177   26069 status.go:330] ha-454952 host status = "Running" (err=<nil>)
	I0404 21:51:35.096194   26069 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:35.096465   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:35.096510   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:35.111297   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I0404 21:51:35.111720   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:35.112265   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:35.112289   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:35.112720   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:35.112896   26069 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:51:35.115753   26069 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:35.116207   26069 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:35.116246   26069 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:35.116396   26069 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:35.116724   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:35.116764   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:35.131385   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0404 21:51:35.131857   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:35.132426   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:35.132456   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:35.132777   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:35.133037   26069 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:51:35.133274   26069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:35.133307   26069 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:51:35.136300   26069 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:35.136760   26069 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:35.136786   26069 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:35.136874   26069 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:51:35.137037   26069 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:51:35.137195   26069 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:51:35.137355   26069 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:51:35.230535   26069 ssh_runner.go:195] Run: systemctl --version
	I0404 21:51:35.238360   26069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:35.254107   26069 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:51:35.254152   26069 api_server.go:166] Checking apiserver status ...
	I0404 21:51:35.254192   26069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:51:35.269565   26069 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	W0404 21:51:35.282217   26069 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:51:35.282289   26069 ssh_runner.go:195] Run: ls
	I0404 21:51:35.287757   26069 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:51:35.292774   26069 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:51:35.292801   26069 status.go:422] ha-454952 apiserver status = Running (err=<nil>)
	I0404 21:51:35.292815   26069 status.go:257] ha-454952 status: &{Name:ha-454952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:51:35.292837   26069 status.go:255] checking status of ha-454952-m02 ...
	I0404 21:51:35.293247   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:35.293292   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:35.308944   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40423
	I0404 21:51:35.309414   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:35.309952   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:35.309970   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:35.310378   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:35.310592   26069 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 21:51:35.312330   26069 status.go:330] ha-454952-m02 host status = "Running" (err=<nil>)
	I0404 21:51:35.312346   26069 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:35.312742   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:35.312786   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:35.327688   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33759
	I0404 21:51:35.328063   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:35.328543   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:35.328565   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:35.328891   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:35.329075   26069 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:51:35.332227   26069 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:35.332736   26069 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:35.332756   26069 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:35.332949   26069 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:35.333360   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:35.333409   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:35.349054   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42941
	I0404 21:51:35.349485   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:35.349943   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:35.349970   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:35.350438   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:35.350629   26069 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:51:35.350828   26069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:35.350847   26069 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:51:35.353737   26069 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:35.354131   26069 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:35.354153   26069 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:35.354367   26069 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:51:35.354563   26069 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:51:35.354731   26069 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:51:35.354910   26069 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	W0404 21:51:36.356418   26069 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:36.356482   26069 retry.go:31] will retry after 197.843102ms: dial tcp 192.168.39.60:22: connect: no route to host
	W0404 21:51:39.428427   26069 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.60:22: connect: no route to host
	W0404 21:51:39.428526   26069 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	E0404 21:51:39.428545   26069 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:39.428552   26069 status.go:257] ha-454952-m02 status: &{Name:ha-454952-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0404 21:51:39.428572   26069 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:39.428579   26069 status.go:255] checking status of ha-454952-m03 ...
	I0404 21:51:39.428867   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:39.428923   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:39.444223   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35805
	I0404 21:51:39.444807   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:39.445418   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:39.445436   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:39.445846   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:39.446124   26069 main.go:141] libmachine: (ha-454952-m03) Calling .GetState
	I0404 21:51:39.447851   26069 status.go:330] ha-454952-m03 host status = "Running" (err=<nil>)
	I0404 21:51:39.447868   26069 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:51:39.448185   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:39.448228   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:39.462780   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34489
	I0404 21:51:39.463223   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:39.463669   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:39.463694   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:39.464073   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:39.464283   26069 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:51:39.467479   26069 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:39.467888   26069 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:51:39.467913   26069 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:39.468041   26069 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:51:39.468361   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:39.468396   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:39.484746   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I0404 21:51:39.485190   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:39.485603   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:39.485626   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:39.486005   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:39.486261   26069 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:51:39.486532   26069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:39.486553   26069 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:51:39.489715   26069 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:39.490288   26069 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:51:39.490315   26069 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:39.490489   26069 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:51:39.490673   26069 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:51:39.490865   26069 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:51:39.490990   26069 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:51:39.572140   26069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:39.589240   26069 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:51:39.589270   26069 api_server.go:166] Checking apiserver status ...
	I0404 21:51:39.589304   26069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:51:39.603902   26069 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0404 21:51:39.615622   26069 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:51:39.615672   26069 ssh_runner.go:195] Run: ls
	I0404 21:51:39.621019   26069 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:51:39.625636   26069 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:51:39.625666   26069 status.go:422] ha-454952-m03 apiserver status = Running (err=<nil>)
	I0404 21:51:39.625679   26069 status.go:257] ha-454952-m03 status: &{Name:ha-454952-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:51:39.625705   26069 status.go:255] checking status of ha-454952-m04 ...
	I0404 21:51:39.626024   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:39.626089   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:39.641207   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37797
	I0404 21:51:39.641678   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:39.642319   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:39.642339   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:39.642675   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:39.642894   26069 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 21:51:39.644588   26069 status.go:330] ha-454952-m04 host status = "Running" (err=<nil>)
	I0404 21:51:39.644606   26069 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:51:39.644877   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:39.644918   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:39.660595   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I0404 21:51:39.661003   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:39.661479   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:39.661502   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:39.661873   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:39.662027   26069 main.go:141] libmachine: (ha-454952-m04) Calling .GetIP
	I0404 21:51:39.664746   26069 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:39.665216   26069 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:51:39.665237   26069 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:39.665377   26069 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:51:39.665655   26069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:39.665690   26069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:39.680842   26069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I0404 21:51:39.681172   26069 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:39.681735   26069 main.go:141] libmachine: Using API Version  1
	I0404 21:51:39.681754   26069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:39.682124   26069 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:39.682349   26069 main.go:141] libmachine: (ha-454952-m04) Calling .DriverName
	I0404 21:51:39.682564   26069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:39.682585   26069 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHHostname
	I0404 21:51:39.685707   26069 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:39.686158   26069 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:51:39.686189   26069 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:39.686341   26069 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHPort
	I0404 21:51:39.686532   26069 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHKeyPath
	I0404 21:51:39.686688   26069 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHUsername
	I0404 21:51:39.686842   26069 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m04/id_rsa Username:docker}
	I0404 21:51:39.768726   26069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:39.784187   26069 status.go:257] ha-454952-m04 status: &{Name:ha-454952-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
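The stderr trace above shows how each node's status is probed: the driver resolves the machine's IP from its DHCP lease, opens an SSH session, and runs a disk-usage check, and when ha-454952-m02 cannot be reached the dial fails with "no route to host", is retried after a short delay (retry.go:31), and the node is finally reported as Host:Error / Kubelet:Nonexistent. The following is a minimal standalone sketch of that dial-and-retry pattern, not minikube's sshutil code; the address, attempt count, and backoff values are illustrative only.

// dialretry.go - standalone sketch of the "dial, retry with backoff" pattern
// seen in the sshutil/retry log lines above. Not minikube's implementation.
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry attempts a TCP connection to addr, retrying on failure with
// a growing delay, and gives up after maxAttempts.
func dialWithRetry(addr string, maxAttempts int) (net.Conn, error) {
	delay := 200 * time.Millisecond
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off between attempts
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	// 192.168.39.60:22 is the ha-454952-m02 SSH endpoint from the log above;
	// against a stopped node the dial fails with "no route to host", which is
	// what drives the Host:Error / Kubelet:Nonexistent result.
	if conn, err := dialWithRetry("192.168.39.60:22", 3); err != nil {
		fmt.Println("status would be reported as Error:", err)
	} else {
		conn.Close()
	}
}

Against a reachable node the first dial succeeds and the probe proceeds to the df/kubelet/apiserver checks recorded in the log.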
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr: exit status 3 (5.023013718s)

                                                
                                                
-- stdout --
	ha-454952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-454952-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 21:51:41.295535   26175 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:51:41.295665   26175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:41.295674   26175 out.go:304] Setting ErrFile to fd 2...
	I0404 21:51:41.295677   26175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:41.295856   26175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:51:41.296017   26175 out.go:298] Setting JSON to false
	I0404 21:51:41.296040   26175 mustload.go:65] Loading cluster: ha-454952
	I0404 21:51:41.296158   26175 notify.go:220] Checking for updates...
	I0404 21:51:41.296439   26175 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:51:41.296454   26175 status.go:255] checking status of ha-454952 ...
	I0404 21:51:41.296837   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:41.296895   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:41.319713   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40149
	I0404 21:51:41.320191   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:41.320923   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:41.320963   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:41.321377   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:41.321578   26175 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:51:41.323247   26175 status.go:330] ha-454952 host status = "Running" (err=<nil>)
	I0404 21:51:41.323264   26175 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:41.323527   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:41.323570   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:41.339715   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39153
	I0404 21:51:41.340193   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:41.340690   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:41.340720   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:41.341079   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:41.341304   26175 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:51:41.344102   26175 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:41.344560   26175 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:41.344588   26175 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:41.344726   26175 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:41.345024   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:41.345073   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:41.360253   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0404 21:51:41.360696   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:41.361147   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:41.361167   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:41.361539   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:41.361759   26175 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:51:41.362006   26175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:41.362032   26175 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:51:41.364754   26175 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:41.365151   26175 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:41.365187   26175 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:41.365303   26175 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:51:41.365512   26175 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:51:41.365728   26175 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:51:41.365900   26175 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:51:41.456547   26175 ssh_runner.go:195] Run: systemctl --version
	I0404 21:51:41.463467   26175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:41.480233   26175 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:51:41.480267   26175 api_server.go:166] Checking apiserver status ...
	I0404 21:51:41.480333   26175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:51:41.495321   26175 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	W0404 21:51:41.507401   26175 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:51:41.507452   26175 ssh_runner.go:195] Run: ls
	I0404 21:51:41.512538   26175 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:51:41.516804   26175 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:51:41.516832   26175 status.go:422] ha-454952 apiserver status = Running (err=<nil>)
	I0404 21:51:41.516855   26175 status.go:257] ha-454952 status: &{Name:ha-454952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:51:41.516881   26175 status.go:255] checking status of ha-454952-m02 ...
	I0404 21:51:41.517173   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:41.517205   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:41.532407   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46767
	I0404 21:51:41.532858   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:41.533422   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:41.533450   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:41.533826   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:41.534078   26175 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 21:51:41.535745   26175 status.go:330] ha-454952-m02 host status = "Running" (err=<nil>)
	I0404 21:51:41.535762   26175 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:41.536055   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:41.536086   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:41.553026   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43111
	I0404 21:51:41.553405   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:41.553901   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:41.553932   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:41.554244   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:41.554456   26175 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:51:41.557587   26175 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:41.558175   26175 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:41.558210   26175 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:41.558374   26175 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:41.558811   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:41.558857   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:41.575919   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I0404 21:51:41.576348   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:41.576823   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:41.576844   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:41.577178   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:41.577384   26175 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:51:41.577572   26175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:41.577602   26175 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:51:41.580197   26175 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:41.580562   26175 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:41.580591   26175 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:41.580708   26175 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:51:41.580975   26175 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:51:41.581140   26175 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:51:41.581286   26175 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	W0404 21:51:42.500368   26175 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:42.500423   26175 retry.go:31] will retry after 333.94021ms: dial tcp 192.168.39.60:22: connect: no route to host
	W0404 21:51:45.892399   26175 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.60:22: connect: no route to host
	W0404 21:51:45.892495   26175 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	E0404 21:51:45.892540   26175 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:45.892548   26175 status.go:257] ha-454952-m02 status: &{Name:ha-454952-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0404 21:51:45.892576   26175 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:45.892585   26175 status.go:255] checking status of ha-454952-m03 ...
	I0404 21:51:45.892861   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:45.892911   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:45.908276   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0404 21:51:45.908708   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:45.909210   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:45.909230   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:45.909581   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:45.909784   26175 main.go:141] libmachine: (ha-454952-m03) Calling .GetState
	I0404 21:51:45.911408   26175 status.go:330] ha-454952-m03 host status = "Running" (err=<nil>)
	I0404 21:51:45.911424   26175 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:51:45.911812   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:45.911847   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:45.926678   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0404 21:51:45.927145   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:45.927682   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:45.927699   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:45.928150   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:45.928390   26175 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:51:45.931502   26175 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:45.931952   26175 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:51:45.931981   26175 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:45.932181   26175 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:51:45.932490   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:45.932533   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:45.947403   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36287
	I0404 21:51:45.947782   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:45.948274   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:45.948295   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:45.948605   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:45.948812   26175 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:51:45.949055   26175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:45.949075   26175 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:51:45.951935   26175 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:45.952342   26175 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:51:45.952383   26175 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:45.952520   26175 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:51:45.952705   26175 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:51:45.952908   26175 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:51:45.953058   26175 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:51:46.044767   26175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:46.062360   26175 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:51:46.062392   26175 api_server.go:166] Checking apiserver status ...
	I0404 21:51:46.062430   26175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:51:46.078323   26175 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0404 21:51:46.090331   26175 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:51:46.090393   26175 ssh_runner.go:195] Run: ls
	I0404 21:51:46.095840   26175 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:51:46.100689   26175 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:51:46.100723   26175 status.go:422] ha-454952-m03 apiserver status = Running (err=<nil>)
	I0404 21:51:46.100734   26175 status.go:257] ha-454952-m03 status: &{Name:ha-454952-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:51:46.100754   26175 status.go:255] checking status of ha-454952-m04 ...
	I0404 21:51:46.101159   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:46.101211   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:46.118513   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0404 21:51:46.118989   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:46.119500   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:46.119527   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:46.119882   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:46.120085   26175 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 21:51:46.121874   26175 status.go:330] ha-454952-m04 host status = "Running" (err=<nil>)
	I0404 21:51:46.121894   26175 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:51:46.122261   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:46.122309   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:46.138213   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39295
	I0404 21:51:46.138617   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:46.139090   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:46.139110   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:46.139412   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:46.139588   26175 main.go:141] libmachine: (ha-454952-m04) Calling .GetIP
	I0404 21:51:46.142517   26175 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:46.143125   26175 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:51:46.143166   26175 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:46.143375   26175 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:51:46.143710   26175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:46.143753   26175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:46.158407   26175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44415
	I0404 21:51:46.158772   26175 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:46.159280   26175 main.go:141] libmachine: Using API Version  1
	I0404 21:51:46.159305   26175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:46.159604   26175 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:46.159795   26175 main.go:141] libmachine: (ha-454952-m04) Calling .DriverName
	I0404 21:51:46.159975   26175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:46.159995   26175 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHHostname
	I0404 21:51:46.163186   26175 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:46.163624   26175 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:51:46.163652   26175 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:46.163793   26175 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHPort
	I0404 21:51:46.163989   26175 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHKeyPath
	I0404 21:51:46.164241   26175 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHUsername
	I0404 21:51:46.164365   26175 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m04/id_rsa Username:docker}
	I0404 21:51:46.244203   26175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:46.259237   26175 status.go:257] ha-454952-m04 status: &{Name:ha-454952-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
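Besides the SSH reachability probe, each control-plane node's apiserver is checked through the cluster's shared endpoint: the log lines "Checking apiserver healthz at https://192.168.39.254:8443/healthz" followed by "returned 200: ok" are what become "apiserver: Running" in the stdout summary. Below is a small self-contained sketch of such an HTTPS healthz probe; it assumes no cluster CA is available and therefore skips TLS verification, whereas minikube's own check differs in detail.

// healthzprobe.go - standalone sketch of an apiserver /healthz check like the
// one logged above. Not minikube's api_server.go; TLS verification is skipped
// here only to keep the example self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// No cluster CA is configured in this sketch, so skip verification;
			// the real check would trust the profile's CA certificate instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // expect "ok"
	return nil
}

func main() {
	// 192.168.39.254:8443 is the HA cluster's virtual apiserver endpoint
	// from the log above.
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver would be reported as not Running:", err)
	}
}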
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr: exit status 3 (3.72321349s)

                                                
                                                
-- stdout --
	ha-454952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-454952-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 21:51:49.226327   26271 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:51:49.226551   26271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:49.226569   26271 out.go:304] Setting ErrFile to fd 2...
	I0404 21:51:49.226578   26271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:49.226886   26271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:51:49.227144   26271 out.go:298] Setting JSON to false
	I0404 21:51:49.227181   26271 mustload.go:65] Loading cluster: ha-454952
	I0404 21:51:49.227244   26271 notify.go:220] Checking for updates...
	I0404 21:51:49.228712   26271 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:51:49.228767   26271 status.go:255] checking status of ha-454952 ...
	I0404 21:51:49.230133   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:49.230193   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:49.247378   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33889
	I0404 21:51:49.247791   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:49.248400   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:49.248419   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:49.248834   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:49.249034   26271 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:51:49.250640   26271 status.go:330] ha-454952 host status = "Running" (err=<nil>)
	I0404 21:51:49.250655   26271 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:49.250936   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:49.250970   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:49.265491   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0404 21:51:49.265870   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:49.266305   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:49.266324   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:49.266640   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:49.266838   26271 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:51:49.269533   26271 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:49.269941   26271 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:49.269968   26271 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:49.270108   26271 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:49.270470   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:49.270503   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:49.284955   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42027
	I0404 21:51:49.285473   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:49.285932   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:49.285962   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:49.286244   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:49.286431   26271 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:51:49.286637   26271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:49.286662   26271 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:51:49.289422   26271 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:49.289791   26271 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:49.289823   26271 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:49.289943   26271 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:51:49.290128   26271 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:51:49.290285   26271 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:51:49.290431   26271 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:51:49.376797   26271 ssh_runner.go:195] Run: systemctl --version
	I0404 21:51:49.383629   26271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:49.399817   26271 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:51:49.399864   26271 api_server.go:166] Checking apiserver status ...
	I0404 21:51:49.399910   26271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:51:49.414456   26271 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	W0404 21:51:49.424483   26271 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:51:49.424540   26271 ssh_runner.go:195] Run: ls
	I0404 21:51:49.429136   26271 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:51:49.433515   26271 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:51:49.433536   26271 status.go:422] ha-454952 apiserver status = Running (err=<nil>)
	I0404 21:51:49.433546   26271 status.go:257] ha-454952 status: &{Name:ha-454952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:51:49.433562   26271 status.go:255] checking status of ha-454952-m02 ...
	I0404 21:51:49.433845   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:49.433875   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:49.449739   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I0404 21:51:49.450165   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:49.450651   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:49.450672   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:49.451006   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:49.451181   26271 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 21:51:49.452868   26271 status.go:330] ha-454952-m02 host status = "Running" (err=<nil>)
	I0404 21:51:49.452884   26271 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:49.453209   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:49.453242   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:49.469004   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0404 21:51:49.469463   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:49.469929   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:49.469950   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:49.470283   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:49.470498   26271 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:51:49.473632   26271 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:49.474065   26271 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:49.474114   26271 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:49.474263   26271 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:49.474669   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:49.474714   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:49.490949   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I0404 21:51:49.491374   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:49.491855   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:49.491875   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:49.492226   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:49.492419   26271 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:51:49.492656   26271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:49.492675   26271 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:51:49.495790   26271 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:49.496183   26271 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:49.496223   26271 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:49.496330   26271 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:51:49.496546   26271 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:51:49.496721   26271 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:51:49.496884   26271 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	W0404 21:51:52.548383   26271 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.60:22: connect: no route to host
	W0404 21:51:52.548474   26271 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	E0404 21:51:52.548492   26271 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:52.548502   26271 status.go:257] ha-454952-m02 status: &{Name:ha-454952-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0404 21:51:52.548530   26271 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:51:52.548540   26271 status.go:255] checking status of ha-454952-m03 ...
	I0404 21:51:52.548869   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:52.548909   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:52.564770   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0404 21:51:52.565188   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:52.565713   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:52.565764   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:52.566107   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:52.566305   26271 main.go:141] libmachine: (ha-454952-m03) Calling .GetState
	I0404 21:51:52.568274   26271 status.go:330] ha-454952-m03 host status = "Running" (err=<nil>)
	I0404 21:51:52.568288   26271 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:51:52.568556   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:52.568591   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:52.582796   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46879
	I0404 21:51:52.583210   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:52.583741   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:52.583775   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:52.584161   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:52.584364   26271 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:51:52.587558   26271 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:52.588055   26271 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:51:52.588075   26271 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:52.588281   26271 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:51:52.588575   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:52.588616   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:52.603454   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I0404 21:51:52.603885   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:52.604405   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:52.604440   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:52.604734   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:52.604957   26271 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:51:52.605135   26271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:52.605164   26271 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:51:52.607757   26271 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:52.608208   26271 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:51:52.608235   26271 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:51:52.608391   26271 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:51:52.608557   26271 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:51:52.608689   26271 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:51:52.608845   26271 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:51:52.688723   26271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:52.703495   26271 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:51:52.703528   26271 api_server.go:166] Checking apiserver status ...
	I0404 21:51:52.703565   26271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:51:52.717975   26271 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0404 21:51:52.729859   26271 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:51:52.729907   26271 ssh_runner.go:195] Run: ls
	I0404 21:51:52.734843   26271 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:51:52.739111   26271 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:51:52.739134   26271 status.go:422] ha-454952-m03 apiserver status = Running (err=<nil>)
	I0404 21:51:52.739145   26271 status.go:257] ha-454952-m03 status: &{Name:ha-454952-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:51:52.739179   26271 status.go:255] checking status of ha-454952-m04 ...
	I0404 21:51:52.739554   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:52.739595   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:52.754351   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I0404 21:51:52.754767   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:52.755195   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:52.755218   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:52.755538   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:52.755740   26271 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 21:51:52.757449   26271 status.go:330] ha-454952-m04 host status = "Running" (err=<nil>)
	I0404 21:51:52.757464   26271 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:51:52.757746   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:52.757779   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:52.772434   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0404 21:51:52.772912   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:52.773382   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:52.773400   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:52.773713   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:52.773863   26271 main.go:141] libmachine: (ha-454952-m04) Calling .GetIP
	I0404 21:51:52.777034   26271 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:52.777486   26271 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:51:52.777521   26271 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:52.777698   26271 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:51:52.777977   26271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:52.778017   26271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:52.792765   26271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38733
	I0404 21:51:52.793180   26271 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:52.793584   26271 main.go:141] libmachine: Using API Version  1
	I0404 21:51:52.793605   26271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:52.793887   26271 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:52.794056   26271 main.go:141] libmachine: (ha-454952-m04) Calling .DriverName
	I0404 21:51:52.794229   26271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:52.794248   26271 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHHostname
	I0404 21:51:52.796878   26271 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:52.797441   26271 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:51:52.797472   26271 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:51:52.797634   26271 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHPort
	I0404 21:51:52.797791   26271 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHKeyPath
	I0404 21:51:52.797974   26271 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHUsername
	I0404 21:51:52.798136   26271 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m04/id_rsa Username:docker}
	I0404 21:51:52.880891   26271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:52.896103   26271 status.go:257] ha-454952-m04 status: &{Name:ha-454952-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
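
In the run above, ha-454952-m02 is reported as Host:Error because the TCP dial to 192.168.39.60:22 fails with "no route to host" before any SSH session can be opened. A minimal Go sketch of that kind of reachability probe, using only the standard library (the host address comes from the log; the helper name is illustrative, not minikube code):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // sshReachable reports whether a TCP connection to host:22 can be opened
    // within the given timeout. It mirrors the "dial tcp ...:22: connect:
    // no route to host" failure in the log; it performs only the TCP dial
    // that precedes an SSH handshake, not the handshake itself.
    func sshReachable(host string, timeout time.Duration) error {
    	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), timeout)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	// 192.168.39.60 is ha-454952-m02's address in the log above.
    	if err := sshReachable("192.168.39.60", 3*time.Second); err != nil {
    		fmt.Println("unreachable, status would degrade to Host:Error:", err)
    		return
    	}
    	fmt.Println("SSH port reachable")
    }
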
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr: exit status 3 (3.752722421s)

                                                
                                                
-- stdout --
	ha-454952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-454952-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 21:51:57.700300   26376 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:51:57.700434   26376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:57.700442   26376 out.go:304] Setting ErrFile to fd 2...
	I0404 21:51:57.700446   26376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:51:57.700798   26376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:51:57.701348   26376 out.go:298] Setting JSON to false
	I0404 21:51:57.701438   26376 mustload.go:65] Loading cluster: ha-454952
	I0404 21:51:57.701886   26376 notify.go:220] Checking for updates...
	I0404 21:51:57.702547   26376 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:51:57.702578   26376 status.go:255] checking status of ha-454952 ...
	I0404 21:51:57.703122   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:57.703192   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:57.718475   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0404 21:51:57.719022   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:57.719610   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:51:57.719636   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:57.720036   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:57.720257   26376 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:51:57.721853   26376 status.go:330] ha-454952 host status = "Running" (err=<nil>)
	I0404 21:51:57.721866   26376 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:57.722155   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:57.722195   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:57.736957   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36377
	I0404 21:51:57.737354   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:57.737748   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:51:57.737766   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:57.738130   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:57.738298   26376 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:51:57.741100   26376 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:57.741512   26376 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:57.741555   26376 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:57.741631   26376 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:51:57.741994   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:57.742039   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:57.756959   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41337
	I0404 21:51:57.757464   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:57.758026   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:51:57.758054   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:57.758393   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:57.758552   26376 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:51:57.758743   26376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:57.758791   26376 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:51:57.761367   26376 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:57.761840   26376 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:51:57.761875   26376 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:51:57.761975   26376 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:51:57.762150   26376 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:51:57.762275   26376 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:51:57.762423   26376 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:51:57.848769   26376 ssh_runner.go:195] Run: systemctl --version
	I0404 21:51:57.855254   26376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:51:57.873200   26376 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:51:57.873254   26376 api_server.go:166] Checking apiserver status ...
	I0404 21:51:57.873315   26376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:51:57.890052   26376 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	W0404 21:51:57.902584   26376 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:51:57.902634   26376 ssh_runner.go:195] Run: ls
	I0404 21:51:57.907591   26376 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:51:57.912405   26376 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:51:57.912427   26376 status.go:422] ha-454952 apiserver status = Running (err=<nil>)
	I0404 21:51:57.912436   26376 status.go:257] ha-454952 status: &{Name:ha-454952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:51:57.912452   26376 status.go:255] checking status of ha-454952-m02 ...
	I0404 21:51:57.912755   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:57.912790   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:57.928977   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35927
	I0404 21:51:57.929332   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:57.929794   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:51:57.929817   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:57.930180   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:57.930340   26376 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 21:51:57.932133   26376 status.go:330] ha-454952-m02 host status = "Running" (err=<nil>)
	I0404 21:51:57.932150   26376 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:57.932440   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:57.932472   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:57.947261   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0404 21:51:57.947662   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:57.948091   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:51:57.948143   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:57.948459   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:57.948643   26376 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:51:57.951676   26376 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:57.952134   26376 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:57.952165   26376 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:57.952305   26376 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 21:51:57.952620   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:51:57.952665   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:51:57.967297   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0404 21:51:57.967674   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:51:57.968163   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:51:57.968188   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:51:57.968514   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:51:57.968681   26376 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:51:57.968877   26376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:51:57.968900   26376 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:51:57.971280   26376 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:57.971656   26376 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:51:57.971685   26376 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:51:57.971820   26376 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:51:57.972008   26376 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:51:57.972171   26376 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:51:57.972303   26376 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	W0404 21:52:01.028443   26376 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.60:22: connect: no route to host
	W0404 21:52:01.028531   26376 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	E0404 21:52:01.028553   26376 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:52:01.028566   26376 status.go:257] ha-454952-m02 status: &{Name:ha-454952-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0404 21:52:01.028691   26376 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	I0404 21:52:01.028724   26376 status.go:255] checking status of ha-454952-m03 ...
	I0404 21:52:01.029170   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:01.029221   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:01.044604   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36755
	I0404 21:52:01.045075   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:01.045542   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:52:01.045560   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:01.045945   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:01.046136   26376 main.go:141] libmachine: (ha-454952-m03) Calling .GetState
	I0404 21:52:01.047619   26376 status.go:330] ha-454952-m03 host status = "Running" (err=<nil>)
	I0404 21:52:01.047632   26376 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:52:01.047900   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:01.047935   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:01.063143   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43107
	I0404 21:52:01.063563   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:01.064025   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:52:01.064048   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:01.064413   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:01.064623   26376 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:52:01.067162   26376 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:01.067537   26376 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:52:01.067571   26376 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:01.067719   26376 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:52:01.068060   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:01.068110   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:01.082517   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36975
	I0404 21:52:01.082939   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:01.083331   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:52:01.083353   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:01.083676   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:01.083860   26376 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:52:01.084061   26376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:52:01.084081   26376 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:52:01.086758   26376 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:01.087217   26376 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:52:01.087240   26376 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:01.087379   26376 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:52:01.087544   26376 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:52:01.087724   26376 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:52:01.087867   26376 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:52:01.177821   26376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:52:01.197466   26376 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:52:01.197492   26376 api_server.go:166] Checking apiserver status ...
	I0404 21:52:01.197526   26376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:52:01.217879   26376 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0404 21:52:01.228499   26376 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:52:01.228560   26376 ssh_runner.go:195] Run: ls
	I0404 21:52:01.233825   26376 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:52:01.238492   26376 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:52:01.238525   26376 status.go:422] ha-454952-m03 apiserver status = Running (err=<nil>)
	I0404 21:52:01.238537   26376 status.go:257] ha-454952-m03 status: &{Name:ha-454952-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:52:01.238560   26376 status.go:255] checking status of ha-454952-m04 ...
	I0404 21:52:01.238868   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:01.238902   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:01.253748   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I0404 21:52:01.254227   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:01.254881   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:52:01.254909   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:01.255267   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:01.255458   26376 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 21:52:01.257076   26376 status.go:330] ha-454952-m04 host status = "Running" (err=<nil>)
	I0404 21:52:01.257090   26376 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:52:01.257362   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:01.257396   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:01.273720   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0404 21:52:01.274192   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:01.274681   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:52:01.274703   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:01.274999   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:01.275212   26376 main.go:141] libmachine: (ha-454952-m04) Calling .GetIP
	I0404 21:52:01.278326   26376 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:01.278791   26376 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:52:01.278832   26376 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:01.278922   26376 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:52:01.279236   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:01.279269   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:01.294020   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
	I0404 21:52:01.294416   26376 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:01.294854   26376 main.go:141] libmachine: Using API Version  1
	I0404 21:52:01.294873   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:01.295174   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:01.295340   26376 main.go:141] libmachine: (ha-454952-m04) Calling .DriverName
	I0404 21:52:01.295520   26376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:52:01.295535   26376 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHHostname
	I0404 21:52:01.298125   26376 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:01.298565   26376 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:52:01.298590   26376 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:01.298735   26376 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHPort
	I0404 21:52:01.298888   26376 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHKeyPath
	I0404 21:52:01.299047   26376 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHUsername
	I0404 21:52:01.299181   26376 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m04/id_rsa Username:docker}
	I0404 21:52:01.381022   26376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:52:01.395915   26376 status.go:257] ha-454952-m04 status: &{Name:ha-454952-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
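
Each status pass above measures /var usage over SSH with sh -c "df -h /var | awk 'NR==2{print $5}'", which prints a single value such as "23%". A minimal Go sketch of turning that output into an integer percentage (the function name is illustrative and not the minikube implementation):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // parseDfPercent converts the single-line output of
    //   df -h /var | awk 'NR==2{print $5}'
    // (for example "23%\n") into an integer percentage.
    func parseDfPercent(out string) (int, error) {
    	s := strings.TrimSuffix(strings.TrimSpace(out), "%")
    	return strconv.Atoi(s)
    }

    func main() {
    	pct, err := parseDfPercent("23%\n")
    	if err != nil {
    		fmt.Println("unexpected df output:", err)
    		return
    	}
    	fmt.Printf("/var is %d%% full\n", pct)
    }
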
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr: exit status 7 (647.021349ms)

                                                
                                                
-- stdout --
	ha-454952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-454952-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 21:52:08.376847   26503 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:52:08.376979   26503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:52:08.376988   26503 out.go:304] Setting ErrFile to fd 2...
	I0404 21:52:08.376993   26503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:52:08.377175   26503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:52:08.377364   26503 out.go:298] Setting JSON to false
	I0404 21:52:08.377390   26503 mustload.go:65] Loading cluster: ha-454952
	I0404 21:52:08.377433   26503 notify.go:220] Checking for updates...
	I0404 21:52:08.377834   26503 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:52:08.377850   26503 status.go:255] checking status of ha-454952 ...
	I0404 21:52:08.378276   26503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:08.378353   26503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:08.394422   26503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I0404 21:52:08.394920   26503 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:08.395514   26503 main.go:141] libmachine: Using API Version  1
	I0404 21:52:08.395561   26503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:08.395889   26503 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:08.396096   26503 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:52:08.398330   26503 status.go:330] ha-454952 host status = "Running" (err=<nil>)
	I0404 21:52:08.398351   26503 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:52:08.398652   26503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:08.398697   26503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:08.415125   26503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39671
	I0404 21:52:08.415643   26503 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:08.416089   26503 main.go:141] libmachine: Using API Version  1
	I0404 21:52:08.416134   26503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:08.416500   26503 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:08.416701   26503 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:52:08.419666   26503 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:52:08.420303   26503 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:52:08.420343   26503 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:52:08.420464   26503 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:52:08.420787   26503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:08.420827   26503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:08.435760   26503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35209
	I0404 21:52:08.436198   26503 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:08.436683   26503 main.go:141] libmachine: Using API Version  1
	I0404 21:52:08.436711   26503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:08.437063   26503 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:08.437253   26503 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:52:08.437433   26503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:52:08.437466   26503 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:52:08.440468   26503 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:52:08.440985   26503 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:52:08.441017   26503 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:52:08.441235   26503 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:52:08.441448   26503 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:52:08.441623   26503 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:52:08.441739   26503 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:52:08.528225   26503 ssh_runner.go:195] Run: systemctl --version
	I0404 21:52:08.535174   26503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:52:08.552084   26503 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:52:08.552138   26503 api_server.go:166] Checking apiserver status ...
	I0404 21:52:08.552188   26503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:52:08.569755   26503 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	W0404 21:52:08.580920   26503 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:52:08.580991   26503 ssh_runner.go:195] Run: ls
	I0404 21:52:08.586708   26503 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:52:08.593285   26503 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:52:08.593312   26503 status.go:422] ha-454952 apiserver status = Running (err=<nil>)
	I0404 21:52:08.593325   26503 status.go:257] ha-454952 status: &{Name:ha-454952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:52:08.593349   26503 status.go:255] checking status of ha-454952-m02 ...
	I0404 21:52:08.593657   26503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:08.593699   26503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:08.609433   26503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
	I0404 21:52:08.609856   26503 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:08.610337   26503 main.go:141] libmachine: Using API Version  1
	I0404 21:52:08.610354   26503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:08.610696   26503 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:08.610956   26503 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 21:52:08.612898   26503 status.go:330] ha-454952-m02 host status = "Stopped" (err=<nil>)
	I0404 21:52:08.612912   26503 status.go:343] host is not running, skipping remaining checks
	I0404 21:52:08.612918   26503 status.go:257] ha-454952-m02 status: &{Name:ha-454952-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:52:08.612946   26503 status.go:255] checking status of ha-454952-m03 ...
	I0404 21:52:08.613226   26503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:08.613262   26503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:08.628451   26503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I0404 21:52:08.628978   26503 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:08.629472   26503 main.go:141] libmachine: Using API Version  1
	I0404 21:52:08.629493   26503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:08.629838   26503 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:08.630063   26503 main.go:141] libmachine: (ha-454952-m03) Calling .GetState
	I0404 21:52:08.631760   26503 status.go:330] ha-454952-m03 host status = "Running" (err=<nil>)
	I0404 21:52:08.631776   26503 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:52:08.632166   26503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:08.632212   26503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:08.646967   26503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34203
	I0404 21:52:08.647402   26503 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:08.647821   26503 main.go:141] libmachine: Using API Version  1
	I0404 21:52:08.647847   26503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:08.648190   26503 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:08.648419   26503 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:52:08.651184   26503 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:08.651581   26503 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:52:08.651606   26503 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:08.651709   26503 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:52:08.651988   26503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:08.652026   26503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:08.667606   26503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42453
	I0404 21:52:08.668054   26503 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:08.668543   26503 main.go:141] libmachine: Using API Version  1
	I0404 21:52:08.668577   26503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:08.669028   26503 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:08.669246   26503 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:52:08.669460   26503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:52:08.669481   26503 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:52:08.672326   26503 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:08.672808   26503 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:52:08.672836   26503 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:08.673013   26503 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:52:08.673173   26503 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:52:08.673288   26503 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:52:08.673411   26503 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:52:08.756302   26503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:52:08.771912   26503 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:52:08.771937   26503 api_server.go:166] Checking apiserver status ...
	I0404 21:52:08.771974   26503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:52:08.787035   26503 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0404 21:52:08.797699   26503 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:52:08.797747   26503 ssh_runner.go:195] Run: ls
	I0404 21:52:08.803582   26503 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:52:08.808075   26503 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:52:08.808099   26503 status.go:422] ha-454952-m03 apiserver status = Running (err=<nil>)
	I0404 21:52:08.808108   26503 status.go:257] ha-454952-m03 status: &{Name:ha-454952-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:52:08.808140   26503 status.go:255] checking status of ha-454952-m04 ...
	I0404 21:52:08.808414   26503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:08.808448   26503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:08.824047   26503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0404 21:52:08.824569   26503 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:08.825001   26503 main.go:141] libmachine: Using API Version  1
	I0404 21:52:08.825020   26503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:08.825324   26503 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:08.825516   26503 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 21:52:08.826964   26503 status.go:330] ha-454952-m04 host status = "Running" (err=<nil>)
	I0404 21:52:08.826979   26503 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:52:08.827267   26503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:08.827337   26503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:08.842274   26503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
	I0404 21:52:08.842643   26503 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:08.843233   26503 main.go:141] libmachine: Using API Version  1
	I0404 21:52:08.843258   26503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:08.843519   26503 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:08.843737   26503 main.go:141] libmachine: (ha-454952-m04) Calling .GetIP
	I0404 21:52:08.846489   26503 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:08.846963   26503 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:52:08.846995   26503 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:08.847087   26503 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:52:08.847367   26503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:08.847406   26503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:08.863322   26503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0404 21:52:08.863787   26503 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:08.864252   26503 main.go:141] libmachine: Using API Version  1
	I0404 21:52:08.864274   26503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:08.864607   26503 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:08.864805   26503 main.go:141] libmachine: (ha-454952-m04) Calling .DriverName
	I0404 21:52:08.865056   26503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:52:08.865083   26503 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHHostname
	I0404 21:52:08.868019   26503 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:08.868469   26503 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:52:08.868512   26503 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:08.868660   26503 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHPort
	I0404 21:52:08.868817   26503 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHKeyPath
	I0404 21:52:08.868951   26503 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHUsername
	I0404 21:52:08.869102   26503 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m04/id_rsa Username:docker}
	I0404 21:52:08.952091   26503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:52:08.968046   26503 status.go:257] ha-454952-m04 status: &{Name:ha-454952-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
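
When SSH succeeds, the run above checks the API server by requesting https://192.168.39.254:8443/healthz and treating an HTTP 200 whose body is "ok" as healthy. A minimal Go sketch of that probe under two stated simplifications: it skips TLS verification for brevity (a real client would trust the cluster CA), and the endpoint is simply the one shown in the log:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy issues GET <base>/healthz and reports healthy only on
    // an HTTP 200 whose body is "ok", matching the check visible in the log.
    func apiserverHealthy(base string) (bool, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Verification is skipped only to keep the sketch self-contained;
    		// production code would configure the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(base + "/healthz")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return false, err
    	}
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
    	ok, err := apiserverHealthy("https://192.168.39.254:8443")
    	fmt.Println("healthy:", ok, "err:", err)
    }
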
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr: exit status 7 (664.119047ms)

                                                
                                                
-- stdout --
	ha-454952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-454952-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 21:52:14.724201   26588 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:52:14.724450   26588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:52:14.724459   26588 out.go:304] Setting ErrFile to fd 2...
	I0404 21:52:14.724463   26588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:52:14.724631   26588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:52:14.724799   26588 out.go:298] Setting JSON to false
	I0404 21:52:14.724823   26588 mustload.go:65] Loading cluster: ha-454952
	I0404 21:52:14.724990   26588 notify.go:220] Checking for updates...
	I0404 21:52:14.725233   26588 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:52:14.725250   26588 status.go:255] checking status of ha-454952 ...
	I0404 21:52:14.725710   26588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:14.725789   26588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:14.741639   26588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44331
	I0404 21:52:14.742303   26588 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:14.743043   26588 main.go:141] libmachine: Using API Version  1
	I0404 21:52:14.743064   26588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:14.743502   26588 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:14.743696   26588 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:52:14.745448   26588 status.go:330] ha-454952 host status = "Running" (err=<nil>)
	I0404 21:52:14.745471   26588 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:52:14.745771   26588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:14.745816   26588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:14.761493   26588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38039
	I0404 21:52:14.761925   26588 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:14.762331   26588 main.go:141] libmachine: Using API Version  1
	I0404 21:52:14.762353   26588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:14.762663   26588 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:14.762841   26588 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:52:14.765754   26588 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:52:14.766223   26588 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:52:14.766258   26588 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:52:14.766406   26588 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:52:14.766758   26588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:14.766818   26588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:14.782345   26588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0404 21:52:14.782795   26588 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:14.783332   26588 main.go:141] libmachine: Using API Version  1
	I0404 21:52:14.783352   26588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:14.783678   26588 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:14.783913   26588 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:52:14.784173   26588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:52:14.784202   26588 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:52:14.787205   26588 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:52:14.787614   26588 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:52:14.787647   26588 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:52:14.787826   26588 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:52:14.787995   26588 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:52:14.788148   26588 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:52:14.788296   26588 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:52:14.880998   26588 ssh_runner.go:195] Run: systemctl --version
	I0404 21:52:14.889639   26588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:52:14.908805   26588 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:52:14.908848   26588 api_server.go:166] Checking apiserver status ...
	I0404 21:52:14.908892   26588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:52:14.924953   26588 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	W0404 21:52:14.937084   26588 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:52:14.937168   26588 ssh_runner.go:195] Run: ls
	I0404 21:52:14.942385   26588 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:52:14.949557   26588 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:52:14.949583   26588 status.go:422] ha-454952 apiserver status = Running (err=<nil>)
	I0404 21:52:14.949595   26588 status.go:257] ha-454952 status: &{Name:ha-454952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:52:14.949614   26588 status.go:255] checking status of ha-454952-m02 ...
	I0404 21:52:14.949947   26588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:14.949989   26588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:14.968885   26588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43357
	I0404 21:52:14.969407   26588 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:14.969960   26588 main.go:141] libmachine: Using API Version  1
	I0404 21:52:14.969985   26588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:14.970318   26588 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:14.970538   26588 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 21:52:14.972057   26588 status.go:330] ha-454952-m02 host status = "Stopped" (err=<nil>)
	I0404 21:52:14.972072   26588 status.go:343] host is not running, skipping remaining checks
	I0404 21:52:14.972078   26588 status.go:257] ha-454952-m02 status: &{Name:ha-454952-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:52:14.972104   26588 status.go:255] checking status of ha-454952-m03 ...
	I0404 21:52:14.972433   26588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:14.972473   26588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:14.988218   26588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37523
	I0404 21:52:14.988653   26588 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:14.989239   26588 main.go:141] libmachine: Using API Version  1
	I0404 21:52:14.989261   26588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:14.989616   26588 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:14.989851   26588 main.go:141] libmachine: (ha-454952-m03) Calling .GetState
	I0404 21:52:14.991645   26588 status.go:330] ha-454952-m03 host status = "Running" (err=<nil>)
	I0404 21:52:14.991660   26588 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:52:14.992000   26588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:14.992038   26588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:15.007285   26588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34637
	I0404 21:52:15.007738   26588 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:15.008242   26588 main.go:141] libmachine: Using API Version  1
	I0404 21:52:15.008267   26588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:15.008623   26588 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:15.008816   26588 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:52:15.011978   26588 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:15.012463   26588 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:52:15.012483   26588 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:15.012651   26588 host.go:66] Checking if "ha-454952-m03" exists ...
	I0404 21:52:15.012951   26588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:15.012986   26588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:15.028363   26588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
	I0404 21:52:15.028997   26588 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:15.029619   26588 main.go:141] libmachine: Using API Version  1
	I0404 21:52:15.029644   26588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:15.030036   26588 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:15.030376   26588 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:52:15.030553   26588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:52:15.030573   26588 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:52:15.033922   26588 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:15.034457   26588 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:52:15.034491   26588 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:15.034653   26588 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:52:15.034815   26588 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:52:15.034962   26588 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:52:15.035107   26588 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:52:15.117003   26588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:52:15.133727   26588 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 21:52:15.133753   26588 api_server.go:166] Checking apiserver status ...
	I0404 21:52:15.133787   26588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:52:15.150530   26588 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0404 21:52:15.161657   26588 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 21:52:15.161708   26588 ssh_runner.go:195] Run: ls
	I0404 21:52:15.167072   26588 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 21:52:15.171509   26588 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 21:52:15.171533   26588 status.go:422] ha-454952-m03 apiserver status = Running (err=<nil>)
	I0404 21:52:15.171541   26588 status.go:257] ha-454952-m03 status: &{Name:ha-454952-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 21:52:15.171555   26588 status.go:255] checking status of ha-454952-m04 ...
	I0404 21:52:15.171840   26588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:15.171872   26588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:15.187209   26588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I0404 21:52:15.187625   26588 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:15.188017   26588 main.go:141] libmachine: Using API Version  1
	I0404 21:52:15.188045   26588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:15.188386   26588 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:15.188592   26588 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 21:52:15.190136   26588 status.go:330] ha-454952-m04 host status = "Running" (err=<nil>)
	I0404 21:52:15.190153   26588 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:52:15.190433   26588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:15.190495   26588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:15.206554   26588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I0404 21:52:15.207007   26588 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:15.207494   26588 main.go:141] libmachine: Using API Version  1
	I0404 21:52:15.207513   26588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:15.207835   26588 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:15.208005   26588 main.go:141] libmachine: (ha-454952-m04) Calling .GetIP
	I0404 21:52:15.210865   26588 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:15.211310   26588 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:52:15.211337   26588 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:15.211457   26588 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 21:52:15.211746   26588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:15.211782   26588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:15.228077   26588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40741
	I0404 21:52:15.228497   26588 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:15.229018   26588 main.go:141] libmachine: Using API Version  1
	I0404 21:52:15.229046   26588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:15.229418   26588 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:15.229671   26588 main.go:141] libmachine: (ha-454952-m04) Calling .DriverName
	I0404 21:52:15.229872   26588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 21:52:15.229906   26588 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHHostname
	I0404 21:52:15.233052   26588 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:15.233444   26588 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:52:15.233473   26588 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:15.233601   26588 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHPort
	I0404 21:52:15.233759   26588 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHKeyPath
	I0404 21:52:15.233891   26588 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHUsername
	I0404 21:52:15.234065   26588 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m04/id_rsa Username:docker}
	I0404 21:52:15.316540   26588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:52:15.333035   26588 status.go:257] ha-454952-m04 status: &{Name:ha-454952-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr" : exit status 7
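For context on the exit status 7 above: minikube status composes its exit code from per-node bit flags, so a single fully stopped node (here ha-454952-m02, whose host, kubelet, apiserver and kubeconfig all report Stopped in the stdout block) is enough to make the command fail even though the other three nodes are healthy. Below is a minimal Go sketch of that kind of bitmask exit code, assuming flag values of 1 (host not running), 2 (kubelet or apiserver not running) and 4 (kubeconfig not configured), which is consistent with the observed 7; it is an illustration, not a verbatim copy of minikube's implementation.

// Illustration of a bitmask status exit code; flag values are assumptions
// chosen to be consistent with the "exit status 7" seen for this cluster.
package main

import "fmt"

const (
	hostNotRunning    = 1 << 0 // 1: VM/host stopped
	clusterNotRunning = 1 << 1 // 2: kubelet or apiserver not running
	kubeconfigNotOK   = 1 << 2 // 4: kubeconfig not configured
)

type nodeStatus struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func exitCode(nodes []nodeStatus) int {
	code := 0
	for _, n := range nodes {
		if n.Host != "Running" {
			code |= hostNotRunning
		}
		if n.Kubelet != "Running" || (n.APIServer != "Running" && n.APIServer != "Irrelevant") {
			code |= clusterNotRunning
		}
		if n.Kubeconfig != "Configured" && n.Kubeconfig != "Irrelevant" {
			code |= kubeconfigNotOK
		}
	}
	return code
}

func main() {
	// Cluster state as reported in the stdout block above.
	nodes := []nodeStatus{
		{"Running", "Running", "Running", "Configured"},    // ha-454952
		{"Stopped", "Stopped", "Stopped", "Stopped"},       // ha-454952-m02
		{"Running", "Running", "Running", "Configured"},    // ha-454952-m03
		{"Running", "Running", "Irrelevant", "Irrelevant"}, // ha-454952-m04
	}
	fmt.Println(exitCode(nodes)) // prints 7
}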
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-454952 -n ha-454952
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-454952 logs -n 25: (1.518821695s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:48 UTC |
	|         | ha-454952-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:48 UTC |
	|         | ha-454952:/home/docker/cp-test_ha-454952-m03_ha-454952.txt                       |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:48 UTC |
	|         | ha-454952-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952 sudo cat                                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-454952-m03_ha-454952.txt                                 |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m02:/home/docker/cp-test_ha-454952-m03_ha-454952-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m02 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m03_ha-454952-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04:/home/docker/cp-test_ha-454952-m03_ha-454952-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m04 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m03_ha-454952-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp testdata/cp-test.txt                                                | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3335929805/001/cp-test_ha-454952-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952:/home/docker/cp-test_ha-454952-m04_ha-454952.txt                       |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952 sudo cat                                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952.txt                                 |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m02:/home/docker/cp-test_ha-454952-m04_ha-454952-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m02 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m03:/home/docker/cp-test_ha-454952-m04_ha-454952-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m03 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-454952 node stop m02 -v=7                                                     | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-454952 node start m02 -v=7                                                    | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 21:44:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 21:44:02.650394   21531 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:44:02.650607   21531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:44:02.650616   21531 out.go:304] Setting ErrFile to fd 2...
	I0404 21:44:02.650620   21531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:44:02.650826   21531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:44:02.651386   21531 out.go:298] Setting JSON to false
	I0404 21:44:02.652235   21531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1588,"bootTime":1712265455,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 21:44:02.652297   21531 start.go:139] virtualization: kvm guest
	I0404 21:44:02.654291   21531 out.go:177] * [ha-454952] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 21:44:02.655636   21531 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 21:44:02.657036   21531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 21:44:02.655660   21531 notify.go:220] Checking for updates...
	I0404 21:44:02.659755   21531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:44:02.661170   21531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:44:02.662602   21531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 21:44:02.663918   21531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 21:44:02.665410   21531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 21:44:02.700312   21531 out.go:177] * Using the kvm2 driver based on user configuration
	I0404 21:44:02.701877   21531 start.go:297] selected driver: kvm2
	I0404 21:44:02.701907   21531 start.go:901] validating driver "kvm2" against <nil>
	I0404 21:44:02.701919   21531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 21:44:02.702602   21531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:44:02.702713   21531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 21:44:02.717645   21531 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 21:44:02.717726   21531 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0404 21:44:02.717927   21531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 21:44:02.717977   21531 cni.go:84] Creating CNI manager for ""
	I0404 21:44:02.717988   21531 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0404 21:44:02.717993   21531 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0404 21:44:02.718036   21531 start.go:340] cluster config:
	{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:44:02.718119   21531 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:44:02.720241   21531 out.go:177] * Starting "ha-454952" primary control-plane node in "ha-454952" cluster
	I0404 21:44:02.721812   21531 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:44:02.721859   21531 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0404 21:44:02.721868   21531 cache.go:56] Caching tarball of preloaded images
	I0404 21:44:02.721945   21531 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 21:44:02.721956   21531 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0404 21:44:02.722293   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:44:02.722316   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json: {Name:mk4e70ee4269c9cb59f2948d042f0e4baab49cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:02.722443   21531 start.go:360] acquireMachinesLock for ha-454952: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 21:44:02.722477   21531 start.go:364] duration metric: took 21.698µs to acquireMachinesLock for "ha-454952"
	I0404 21:44:02.722496   21531 start.go:93] Provisioning new machine with config: &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:44:02.722554   21531 start.go:125] createHost starting for "" (driver="kvm2")
	I0404 21:44:02.724484   21531 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0404 21:44:02.724632   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:02.724674   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:02.738825   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0404 21:44:02.739320   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:02.739884   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:02.739905   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:02.740267   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:02.740494   21531 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:44:02.740655   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:02.740912   21531 start.go:159] libmachine.API.Create for "ha-454952" (driver="kvm2")
	I0404 21:44:02.740964   21531 client.go:168] LocalClient.Create starting
	I0404 21:44:02.741006   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem
	I0404 21:44:02.741067   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:44:02.741092   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:44:02.741161   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem
	I0404 21:44:02.741187   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:44:02.741204   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:44:02.741228   21531 main.go:141] libmachine: Running pre-create checks...
	I0404 21:44:02.741247   21531 main.go:141] libmachine: (ha-454952) Calling .PreCreateCheck
	I0404 21:44:02.741602   21531 main.go:141] libmachine: (ha-454952) Calling .GetConfigRaw
	I0404 21:44:02.742069   21531 main.go:141] libmachine: Creating machine...
	I0404 21:44:02.742086   21531 main.go:141] libmachine: (ha-454952) Calling .Create
	I0404 21:44:02.742265   21531 main.go:141] libmachine: (ha-454952) Creating KVM machine...
	I0404 21:44:02.743630   21531 main.go:141] libmachine: (ha-454952) DBG | found existing default KVM network
	I0404 21:44:02.744377   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:02.744215   21554 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0404 21:44:02.744409   21531 main.go:141] libmachine: (ha-454952) DBG | created network xml: 
	I0404 21:44:02.744428   21531 main.go:141] libmachine: (ha-454952) DBG | <network>
	I0404 21:44:02.744438   21531 main.go:141] libmachine: (ha-454952) DBG |   <name>mk-ha-454952</name>
	I0404 21:44:02.744458   21531 main.go:141] libmachine: (ha-454952) DBG |   <dns enable='no'/>
	I0404 21:44:02.744468   21531 main.go:141] libmachine: (ha-454952) DBG |   
	I0404 21:44:02.744479   21531 main.go:141] libmachine: (ha-454952) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0404 21:44:02.744484   21531 main.go:141] libmachine: (ha-454952) DBG |     <dhcp>
	I0404 21:44:02.744492   21531 main.go:141] libmachine: (ha-454952) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0404 21:44:02.744498   21531 main.go:141] libmachine: (ha-454952) DBG |     </dhcp>
	I0404 21:44:02.744521   21531 main.go:141] libmachine: (ha-454952) DBG |   </ip>
	I0404 21:44:02.744545   21531 main.go:141] libmachine: (ha-454952) DBG |   
	I0404 21:44:02.744562   21531 main.go:141] libmachine: (ha-454952) DBG | </network>
	I0404 21:44:02.744575   21531 main.go:141] libmachine: (ha-454952) DBG | 
	I0404 21:44:02.749979   21531 main.go:141] libmachine: (ha-454952) DBG | trying to create private KVM network mk-ha-454952 192.168.39.0/24...
	I0404 21:44:02.815031   21531 main.go:141] libmachine: (ha-454952) DBG | private KVM network mk-ha-454952 192.168.39.0/24 created
	I0404 21:44:02.815062   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:02.815011   21554 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:44:02.815071   21531 main.go:141] libmachine: (ha-454952) Setting up store path in /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952 ...
	I0404 21:44:02.815081   21531 main.go:141] libmachine: (ha-454952) Building disk image from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 21:44:02.815130   21531 main.go:141] libmachine: (ha-454952) Downloading /home/jenkins/minikube-integration/16143-5297/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0404 21:44:03.040505   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:03.040387   21554 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa...
	I0404 21:44:03.155462   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:03.155291   21554 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/ha-454952.rawdisk...
	I0404 21:44:03.155494   21531 main.go:141] libmachine: (ha-454952) DBG | Writing magic tar header
	I0404 21:44:03.155508   21531 main.go:141] libmachine: (ha-454952) DBG | Writing SSH key tar header
	I0404 21:44:03.155519   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:03.155407   21554 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952 ...
	I0404 21:44:03.155533   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952 (perms=drwx------)
	I0404 21:44:03.155547   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines (perms=drwxr-xr-x)
	I0404 21:44:03.155555   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube (perms=drwxr-xr-x)
	I0404 21:44:03.155562   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297 (perms=drwxrwxr-x)
	I0404 21:44:03.155567   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0404 21:44:03.155575   21531 main.go:141] libmachine: (ha-454952) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0404 21:44:03.155581   21531 main.go:141] libmachine: (ha-454952) Creating domain...
	I0404 21:44:03.155616   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952
	I0404 21:44:03.155675   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines
	I0404 21:44:03.155693   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:44:03.155704   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297
	I0404 21:44:03.155737   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0404 21:44:03.155750   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home/jenkins
	I0404 21:44:03.155764   21531 main.go:141] libmachine: (ha-454952) DBG | Checking permissions on dir: /home
	I0404 21:44:03.155783   21531 main.go:141] libmachine: (ha-454952) DBG | Skipping /home - not owner
	I0404 21:44:03.156871   21531 main.go:141] libmachine: (ha-454952) define libvirt domain using xml: 
	I0404 21:44:03.156895   21531 main.go:141] libmachine: (ha-454952) <domain type='kvm'>
	I0404 21:44:03.156903   21531 main.go:141] libmachine: (ha-454952)   <name>ha-454952</name>
	I0404 21:44:03.156908   21531 main.go:141] libmachine: (ha-454952)   <memory unit='MiB'>2200</memory>
	I0404 21:44:03.156914   21531 main.go:141] libmachine: (ha-454952)   <vcpu>2</vcpu>
	I0404 21:44:03.156919   21531 main.go:141] libmachine: (ha-454952)   <features>
	I0404 21:44:03.156924   21531 main.go:141] libmachine: (ha-454952)     <acpi/>
	I0404 21:44:03.156927   21531 main.go:141] libmachine: (ha-454952)     <apic/>
	I0404 21:44:03.156934   21531 main.go:141] libmachine: (ha-454952)     <pae/>
	I0404 21:44:03.156941   21531 main.go:141] libmachine: (ha-454952)     
	I0404 21:44:03.156950   21531 main.go:141] libmachine: (ha-454952)   </features>
	I0404 21:44:03.156959   21531 main.go:141] libmachine: (ha-454952)   <cpu mode='host-passthrough'>
	I0404 21:44:03.156968   21531 main.go:141] libmachine: (ha-454952)   
	I0404 21:44:03.156986   21531 main.go:141] libmachine: (ha-454952)   </cpu>
	I0404 21:44:03.156998   21531 main.go:141] libmachine: (ha-454952)   <os>
	I0404 21:44:03.157006   21531 main.go:141] libmachine: (ha-454952)     <type>hvm</type>
	I0404 21:44:03.157011   21531 main.go:141] libmachine: (ha-454952)     <boot dev='cdrom'/>
	I0404 21:44:03.157018   21531 main.go:141] libmachine: (ha-454952)     <boot dev='hd'/>
	I0404 21:44:03.157027   21531 main.go:141] libmachine: (ha-454952)     <bootmenu enable='no'/>
	I0404 21:44:03.157037   21531 main.go:141] libmachine: (ha-454952)   </os>
	I0404 21:44:03.157065   21531 main.go:141] libmachine: (ha-454952)   <devices>
	I0404 21:44:03.157090   21531 main.go:141] libmachine: (ha-454952)     <disk type='file' device='cdrom'>
	I0404 21:44:03.157110   21531 main.go:141] libmachine: (ha-454952)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/boot2docker.iso'/>
	I0404 21:44:03.157125   21531 main.go:141] libmachine: (ha-454952)       <target dev='hdc' bus='scsi'/>
	I0404 21:44:03.157139   21531 main.go:141] libmachine: (ha-454952)       <readonly/>
	I0404 21:44:03.157148   21531 main.go:141] libmachine: (ha-454952)     </disk>
	I0404 21:44:03.157157   21531 main.go:141] libmachine: (ha-454952)     <disk type='file' device='disk'>
	I0404 21:44:03.157165   21531 main.go:141] libmachine: (ha-454952)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0404 21:44:03.157174   21531 main.go:141] libmachine: (ha-454952)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/ha-454952.rawdisk'/>
	I0404 21:44:03.157183   21531 main.go:141] libmachine: (ha-454952)       <target dev='hda' bus='virtio'/>
	I0404 21:44:03.157191   21531 main.go:141] libmachine: (ha-454952)     </disk>
	I0404 21:44:03.157203   21531 main.go:141] libmachine: (ha-454952)     <interface type='network'>
	I0404 21:44:03.157216   21531 main.go:141] libmachine: (ha-454952)       <source network='mk-ha-454952'/>
	I0404 21:44:03.157227   21531 main.go:141] libmachine: (ha-454952)       <model type='virtio'/>
	I0404 21:44:03.157235   21531 main.go:141] libmachine: (ha-454952)     </interface>
	I0404 21:44:03.157243   21531 main.go:141] libmachine: (ha-454952)     <interface type='network'>
	I0404 21:44:03.157253   21531 main.go:141] libmachine: (ha-454952)       <source network='default'/>
	I0404 21:44:03.157261   21531 main.go:141] libmachine: (ha-454952)       <model type='virtio'/>
	I0404 21:44:03.157284   21531 main.go:141] libmachine: (ha-454952)     </interface>
	I0404 21:44:03.157306   21531 main.go:141] libmachine: (ha-454952)     <serial type='pty'>
	I0404 21:44:03.157325   21531 main.go:141] libmachine: (ha-454952)       <target port='0'/>
	I0404 21:44:03.157341   21531 main.go:141] libmachine: (ha-454952)     </serial>
	I0404 21:44:03.157350   21531 main.go:141] libmachine: (ha-454952)     <console type='pty'>
	I0404 21:44:03.157371   21531 main.go:141] libmachine: (ha-454952)       <target type='serial' port='0'/>
	I0404 21:44:03.157385   21531 main.go:141] libmachine: (ha-454952)     </console>
	I0404 21:44:03.157395   21531 main.go:141] libmachine: (ha-454952)     <rng model='virtio'>
	I0404 21:44:03.157409   21531 main.go:141] libmachine: (ha-454952)       <backend model='random'>/dev/random</backend>
	I0404 21:44:03.157419   21531 main.go:141] libmachine: (ha-454952)     </rng>
	I0404 21:44:03.157432   21531 main.go:141] libmachine: (ha-454952)     
	I0404 21:44:03.157439   21531 main.go:141] libmachine: (ha-454952)     
	I0404 21:44:03.157474   21531 main.go:141] libmachine: (ha-454952)   </devices>
	I0404 21:44:03.157491   21531 main.go:141] libmachine: (ha-454952) </domain>
	I0404 21:44:03.157502   21531 main.go:141] libmachine: (ha-454952) 
	I0404 21:44:03.161889   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:bd:22:8e in network default
	I0404 21:44:03.162497   21531 main.go:141] libmachine: (ha-454952) Ensuring networks are active...
	I0404 21:44:03.162516   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:03.163268   21531 main.go:141] libmachine: (ha-454952) Ensuring network default is active
	I0404 21:44:03.163590   21531 main.go:141] libmachine: (ha-454952) Ensuring network mk-ha-454952 is active
	I0404 21:44:03.164228   21531 main.go:141] libmachine: (ha-454952) Getting domain xml...
	I0404 21:44:03.165032   21531 main.go:141] libmachine: (ha-454952) Creating domain...
	I0404 21:44:04.361667   21531 main.go:141] libmachine: (ha-454952) Waiting to get IP...
	I0404 21:44:04.362712   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:04.363169   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:04.363190   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:04.363153   21554 retry.go:31] will retry after 295.412756ms: waiting for machine to come up
	I0404 21:44:04.660648   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:04.661103   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:04.661126   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:04.661058   21554 retry.go:31] will retry after 377.487782ms: waiting for machine to come up
	I0404 21:44:05.040684   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:05.041058   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:05.041090   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:05.041004   21554 retry.go:31] will retry after 338.171412ms: waiting for machine to come up
	I0404 21:44:05.380606   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:05.381050   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:05.381072   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:05.381020   21554 retry.go:31] will retry after 586.830945ms: waiting for machine to come up
	I0404 21:44:05.969744   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:05.970148   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:05.970182   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:05.970099   21554 retry.go:31] will retry after 507.958651ms: waiting for machine to come up
	I0404 21:44:06.479955   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:06.480413   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:06.480435   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:06.480362   21554 retry.go:31] will retry after 732.782622ms: waiting for machine to come up
	I0404 21:44:07.214391   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:07.214799   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:07.214843   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:07.214752   21554 retry.go:31] will retry after 1.155748181s: waiting for machine to come up
	I0404 21:44:08.373262   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:08.373700   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:08.373727   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:08.373649   21554 retry.go:31] will retry after 1.039318253s: waiting for machine to come up
	I0404 21:44:09.414830   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:09.415361   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:09.415391   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:09.415320   21554 retry.go:31] will retry after 1.419610359s: waiting for machine to come up
	I0404 21:44:10.836320   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:10.836872   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:10.836905   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:10.836729   21554 retry.go:31] will retry after 1.868110352s: waiting for machine to come up
	I0404 21:44:12.707917   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:12.708396   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:12.708423   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:12.708338   21554 retry.go:31] will retry after 1.901548289s: waiting for machine to come up
	I0404 21:44:14.611238   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:14.611713   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:14.611740   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:14.611667   21554 retry.go:31] will retry after 3.155171492s: waiting for machine to come up
	I0404 21:44:17.768546   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:17.769049   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:17.769076   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:17.769006   21554 retry.go:31] will retry after 4.202788757s: waiting for machine to come up
	I0404 21:44:21.976393   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:21.976825   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find current IP address of domain ha-454952 in network mk-ha-454952
	I0404 21:44:21.976889   21531 main.go:141] libmachine: (ha-454952) DBG | I0404 21:44:21.976804   21554 retry.go:31] will retry after 4.385711421s: waiting for machine to come up
	I0404 21:44:26.367198   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.367737   21531 main.go:141] libmachine: (ha-454952) Found IP for machine: 192.168.39.13
	I0404 21:44:26.367850   21531 main.go:141] libmachine: (ha-454952) Reserving static IP address...
	I0404 21:44:26.367871   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has current primary IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.368150   21531 main.go:141] libmachine: (ha-454952) DBG | unable to find host DHCP lease matching {name: "ha-454952", mac: "52:54:00:39:86:be", ip: "192.168.39.13"} in network mk-ha-454952
	I0404 21:44:26.441469   21531 main.go:141] libmachine: (ha-454952) DBG | Getting to WaitForSSH function...
	I0404 21:44:26.441503   21531 main.go:141] libmachine: (ha-454952) Reserved static IP address: 192.168.39.13
	I0404 21:44:26.441516   21531 main.go:141] libmachine: (ha-454952) Waiting for SSH to be available...
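The repeated "will retry after …: waiting for machine to come up" lines above are a poll loop with a growing, jittered delay that ends once the domain reports an IP. A minimal Go sketch of that pattern follows; waitFor and the inline lookup function are illustrative stand-ins, not minikube's actual retry package API.

    // Illustrative sketch only: poll with a growing, jittered delay until the
    // machine reports an IP or the deadline passes. Not minikube's retry API.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func waitFor(timeout time.Duration, fn func() (string, error)) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for {
            ip, err := fn()
            if err == nil {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out: %w", err)
            }
            // Grow the delay and add jitter, as in the "will retry after ..." lines.
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
            if delay < 5*time.Second {
                delay = delay * 3 / 2
            }
        }
    }

    func main() {
        attempts := 0
        ip, err := waitFor(30*time.Second, func() (string, error) {
            attempts++
            if attempts < 4 { // pretend DHCP has not handed out a lease yet
                return "", errors.New("unable to find current IP address")
            }
            return "192.168.39.13", nil
        })
        fmt.Println(ip, err)
    }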
	I0404 21:44:26.444532   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.445011   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:86:be}
	I0404 21:44:26.445046   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.445188   21531 main.go:141] libmachine: (ha-454952) DBG | Using SSH client type: external
	I0404 21:44:26.445219   21531 main.go:141] libmachine: (ha-454952) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa (-rw-------)
	I0404 21:44:26.445265   21531 main.go:141] libmachine: (ha-454952) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 21:44:26.445281   21531 main.go:141] libmachine: (ha-454952) DBG | About to run SSH command:
	I0404 21:44:26.445294   21531 main.go:141] libmachine: (ha-454952) DBG | exit 0
	I0404 21:44:26.576310   21531 main.go:141] libmachine: (ha-454952) DBG | SSH cmd err, output: <nil>: 
	I0404 21:44:26.576556   21531 main.go:141] libmachine: (ha-454952) KVM machine creation complete!
	I0404 21:44:26.576934   21531 main.go:141] libmachine: (ha-454952) Calling .GetConfigRaw
	I0404 21:44:26.577438   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:26.577631   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:26.577815   21531 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0404 21:44:26.577827   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:44:26.579195   21531 main.go:141] libmachine: Detecting operating system of created instance...
	I0404 21:44:26.579209   21531 main.go:141] libmachine: Waiting for SSH to be available...
	I0404 21:44:26.579215   21531 main.go:141] libmachine: Getting to WaitForSSH function...
	I0404 21:44:26.579221   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:26.581224   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.581580   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:26.581607   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.581716   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:26.581897   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.582035   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.582188   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:26.582388   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:26.582583   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:26.582596   21531 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0404 21:44:26.695872   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:44:26.695909   21531 main.go:141] libmachine: Detecting the provisioner...
	I0404 21:44:26.695925   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:26.698471   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.698852   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:26.698882   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.699019   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:26.699219   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.699376   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.699514   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:26.699684   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:26.699877   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:26.699891   21531 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0404 21:44:26.813300   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0404 21:44:26.813393   21531 main.go:141] libmachine: found compatible host: buildroot
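Provisioner detection above is driven by the /etc/os-release output returned over SSH. A small Go sketch of parsing that key=value format; parseOSRelease is a hypothetical helper used only for this example, not minikube's code.

    // Illustrative sketch only: detect the provisioner from /etc/os-release output
    // such as the Buildroot block above.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func parseOSRelease(out string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || !strings.Contains(line, "=") {
                continue
            }
            kv := strings.SplitN(line, "=", 2)
            fields[kv[0]] = strings.Trim(kv[1], `"`)
        }
        return fields
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        fields := parseOSRelease(out)
        if fields["ID"] == "buildroot" {
            fmt.Println("found compatible host:", fields["ID"])
        }
    }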
	I0404 21:44:26.813408   21531 main.go:141] libmachine: Provisioning with buildroot...
	I0404 21:44:26.813423   21531 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:44:26.813658   21531 buildroot.go:166] provisioning hostname "ha-454952"
	I0404 21:44:26.813678   21531 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:44:26.813879   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:26.816475   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.816853   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:26.816873   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.817084   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:26.817246   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.817407   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.817572   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:26.817720   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:26.817879   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:26.817893   21531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-454952 && echo "ha-454952" | sudo tee /etc/hostname
	I0404 21:44:26.953477   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-454952
	
	I0404 21:44:26.953501   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:26.955918   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.956254   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:26.956281   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:26.956435   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:26.956605   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.956764   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:26.956900   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:26.957062   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:26.957268   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:26.957303   21531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-454952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-454952/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-454952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 21:44:27.085734   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
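The hostname step above runs two SSH commands: one sets /etc/hostname, the other patches /etc/hosts only when the name is not already present. A hedged Go sketch of how such command strings could be composed; hostnameCmd and hostsPatchCmd are made-up names, not minikube functions.

    // Illustrative sketch only: composing the two hostname commands shown above.
    package main

    import "fmt"

    func hostnameCmd(name string) string {
        return fmt.Sprintf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)
    }

    func hostsPatchCmd(name string) string {
        // Only rewrite /etc/hosts when the hostname is not already present.
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, name)
    }

    func main() {
        fmt.Println(hostnameCmd("ha-454952"))
        fmt.Println(hostsPatchCmd("ha-454952"))
    }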
	I0404 21:44:27.085763   21531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 21:44:27.085800   21531 buildroot.go:174] setting up certificates
	I0404 21:44:27.085814   21531 provision.go:84] configureAuth start
	I0404 21:44:27.085826   21531 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:44:27.086102   21531 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:44:27.088723   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.089070   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.089097   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.089278   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:27.091279   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.091540   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.091565   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.091736   21531 provision.go:143] copyHostCerts
	I0404 21:44:27.091762   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:44:27.091798   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 21:44:27.091807   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:44:27.091867   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 21:44:27.091933   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:44:27.091955   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 21:44:27.091962   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:44:27.091985   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 21:44:27.092021   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:44:27.092037   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 21:44:27.092043   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:44:27.092062   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 21:44:27.092101   21531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.ha-454952 san=[127.0.0.1 192.168.39.13 ha-454952 localhost minikube]
	I0404 21:44:27.342904   21531 provision.go:177] copyRemoteCerts
	I0404 21:44:27.342956   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 21:44:27.342975   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:27.345785   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.346132   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.346166   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.346322   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:27.346522   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.346670   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:27.346786   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:27.440021   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0404 21:44:27.440096   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 21:44:27.469815   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0404 21:44:27.469870   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0404 21:44:27.496876   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0404 21:44:27.496932   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 21:44:27.524389   21531 provision.go:87] duration metric: took 438.562222ms to configureAuth
	I0404 21:44:27.524411   21531 buildroot.go:189] setting minikube options for container-runtime
	I0404 21:44:27.524565   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:44:27.524631   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:27.527186   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.527530   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.527550   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.527750   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:27.527913   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.528041   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.528174   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:27.528313   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:27.528464   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:27.528478   21531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 21:44:27.811117   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 21:44:27.811149   21531 main.go:141] libmachine: Checking connection to Docker...
	I0404 21:44:27.811159   21531 main.go:141] libmachine: (ha-454952) Calling .GetURL
	I0404 21:44:27.812329   21531 main.go:141] libmachine: (ha-454952) DBG | Using libvirt version 6000000
	I0404 21:44:27.814505   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.814878   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.814905   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.815034   21531 main.go:141] libmachine: Docker is up and running!
	I0404 21:44:27.815050   21531 main.go:141] libmachine: Reticulating splines...
	I0404 21:44:27.815058   21531 client.go:171] duration metric: took 25.07408183s to LocalClient.Create
	I0404 21:44:27.815077   21531 start.go:167] duration metric: took 25.074167258s to libmachine.API.Create "ha-454952"
	I0404 21:44:27.815085   21531 start.go:293] postStartSetup for "ha-454952" (driver="kvm2")
	I0404 21:44:27.815094   21531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 21:44:27.815115   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:27.815309   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 21:44:27.815328   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:27.817163   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.817438   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.817461   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.817634   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:27.817783   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.817942   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:27.818039   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:27.906609   21531 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 21:44:27.911083   21531 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 21:44:27.911100   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 21:44:27.911174   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 21:44:27.911268   21531 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 21:44:27.911282   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /etc/ssl/certs/125542.pem
	I0404 21:44:27.911417   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 21:44:27.921755   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:44:27.946614   21531 start.go:296] duration metric: took 131.516007ms for postStartSetup
	I0404 21:44:27.946659   21531 main.go:141] libmachine: (ha-454952) Calling .GetConfigRaw
	I0404 21:44:27.947234   21531 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:44:27.949891   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.950293   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.950327   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.950485   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:44:27.950675   21531 start.go:128] duration metric: took 25.228112122s to createHost
	I0404 21:44:27.950701   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:27.953337   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.953692   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:27.953710   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:27.953840   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:27.953986   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.954127   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:27.954248   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:27.954409   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:44:27.954572   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:44:27.954590   21531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 21:44:28.069250   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712267068.043455500
	
	I0404 21:44:28.069276   21531 fix.go:216] guest clock: 1712267068.043455500
	I0404 21:44:28.069283   21531 fix.go:229] Guest: 2024-04-04 21:44:28.0434555 +0000 UTC Remote: 2024-04-04 21:44:27.950687712 +0000 UTC m=+25.347320907 (delta=92.767788ms)
	I0404 21:44:28.069302   21531 fix.go:200] guest clock delta is within tolerance: 92.767788ms
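The clock-check lines above compare the guest clock (read with `date +%s.%N`) against the host clock and accept the skew if it is within tolerance. A minimal Go sketch of that comparison, reusing the timestamps from the log; the 2s tolerance is an assumption made only for this example.

    // Illustrative sketch only: guest-vs-host clock delta check.
    package main

    import (
        "fmt"
        "time"
    )

    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        // Guest: 1712267068.043455500 -> 2024-04-04 21:44:28.0434555 UTC (from the log).
        guest := time.Unix(1712267068, 43455500).UTC()
        host := time.Date(2024, 4, 4, 21, 44, 27, 950687712, time.UTC)
        delta, ok := clockDeltaOK(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }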
	I0404 21:44:28.069307   21531 start.go:83] releasing machines lock for "ha-454952", held for 25.346821713s
	I0404 21:44:28.069325   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:28.069571   21531 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:44:28.072197   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.072579   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:28.072605   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.072752   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:28.073339   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:28.073505   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:28.073602   21531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 21:44:28.073641   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:28.073650   21531 ssh_runner.go:195] Run: cat /version.json
	I0404 21:44:28.073662   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:28.075990   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.076324   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:28.076352   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.076376   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.076506   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:28.076679   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:28.076791   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:28.076817   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:28.076840   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:28.076948   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:28.077018   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:28.077110   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:28.077250   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:28.077420   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:28.161876   21531 ssh_runner.go:195] Run: systemctl --version
	I0404 21:44:28.196517   21531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 21:44:28.365546   21531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 21:44:28.371823   21531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 21:44:28.371886   21531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 21:44:28.389245   21531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 21:44:28.389266   21531 start.go:494] detecting cgroup driver to use...
	I0404 21:44:28.389343   21531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 21:44:28.408113   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 21:44:28.425185   21531 docker.go:217] disabling cri-docker service (if available) ...
	I0404 21:44:28.425234   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 21:44:28.440355   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 21:44:28.456055   21531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 21:44:28.579016   21531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 21:44:28.730964   21531 docker.go:233] disabling docker service ...
	I0404 21:44:28.731038   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 21:44:28.747024   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 21:44:28.760738   21531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 21:44:28.894085   21531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 21:44:29.037863   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 21:44:29.053162   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 21:44:29.072981   21531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 21:44:29.073044   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.084318   21531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 21:44:29.084391   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.095696   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.106440   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.117716   21531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 21:44:29.129015   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.139990   21531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.158444   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:44:29.171998   21531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 21:44:29.183910   21531 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 21:44:29.183971   21531 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 21:44:29.199116   21531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
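The sequence above probes the bridge-netfilter sysctl, treats a failure as tolerable, loads br_netfilter, and enables IPv4 forwarding. A sketch of that check-then-fallback flow in Go; run and ensureNetfilter are hypothetical helpers, not minikube's ssh_runner.

    // Illustrative sketch only: sysctl probe with modprobe fallback, then ip_forward.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(cmd string, args ...string) error {
        out, err := exec.Command(cmd, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %w: %s", cmd, args, err, out)
        }
        return nil
    }

    func ensureNetfilter() error {
        // A failed probe "might be okay": the module may simply not be loaded yet.
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
                return err
            }
        }
        return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }

    func main() {
        if err := ensureNetfilter(); err != nil {
            fmt.Println("netfilter setup:", err)
        }
    }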
	I0404 21:44:29.210129   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:44:29.340830   21531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 21:44:29.494180   21531 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 21:44:29.494265   21531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 21:44:29.500266   21531 start.go:562] Will wait 60s for crictl version
	I0404 21:44:29.500352   21531 ssh_runner.go:195] Run: which crictl
	I0404 21:44:29.504228   21531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 21:44:29.545448   21531 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 21:44:29.545540   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:44:29.575479   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:44:29.608745   21531 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 21:44:29.610316   21531 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:44:29.612701   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:29.612985   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:29.613010   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:29.613173   21531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 21:44:29.617489   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:44:29.631869   21531 kubeadm.go:877] updating cluster {Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 21:44:29.631987   21531 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:44:29.632032   21531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 21:44:29.667707   21531 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 21:44:29.667791   21531 ssh_runner.go:195] Run: which lz4
	I0404 21:44:29.672037   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0404 21:44:29.672145   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 21:44:29.676449   21531 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 21:44:29.676475   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 21:44:31.207161   21531 crio.go:462] duration metric: took 1.535055588s to copy over tarball
	I0404 21:44:31.207271   21531 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 21:44:33.536211   21531 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.328913592s)
	I0404 21:44:33.536247   21531 crio.go:469] duration metric: took 2.329050777s to extract the tarball
	I0404 21:44:33.536256   21531 ssh_runner.go:146] rm: /preloaded.tar.lz4
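The preload handling above checks whether /preloaded.tar.lz4 already exists on the guest, copies the cached tarball over when it does not, extracts it into /var with lz4 while preserving xattrs, and then removes it. A hedged Go sketch of that flow; runOnGuest and copyToGuest shell out to ssh/scp and are stand-ins, not minikube's ssh_runner API.

    // Illustrative sketch only: stat-then-copy-then-extract for the preload tarball.
    package main

    import (
        "fmt"
        "os/exec"
    )

    const guest = "docker@192.168.39.13"

    func runOnGuest(cmd string) error {
        return exec.Command("ssh", guest, cmd).Run()
    }

    func copyToGuest(local, remote string) error {
        return exec.Command("scp", local, guest+":"+remote).Run()
    }

    func ensurePreload(localTarball string) error {
        const remote = "/preloaded.tar.lz4"
        // Existence check mirrors: stat -c "%s %y" /preloaded.tar.lz4
        if err := runOnGuest(`stat -c "%s %y" ` + remote); err != nil {
            if err := copyToGuest(localTarball, remote); err != nil {
                return fmt.Errorf("copy preload: %w", err)
            }
        }
        if err := runOnGuest("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
            return fmt.Errorf("extract preload: %w", err)
        }
        return runOnGuest("sudo rm -f " + remote)
    }

    func main() {
        if err := ensurePreload("preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4"); err != nil {
            fmt.Println(err)
        }
    }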
	I0404 21:44:33.575332   21531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 21:44:33.623579   21531 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 21:44:33.623604   21531 cache_images.go:84] Images are preloaded, skipping loading
	I0404 21:44:33.623613   21531 kubeadm.go:928] updating node { 192.168.39.13 8443 v1.29.3 crio true true} ...
	I0404 21:44:33.623744   21531 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-454952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 21:44:33.623819   21531 ssh_runner.go:195] Run: crio config
	I0404 21:44:33.672380   21531 cni.go:84] Creating CNI manager for ""
	I0404 21:44:33.672404   21531 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0404 21:44:33.672414   21531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 21:44:33.672434   21531 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.13 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-454952 NodeName:ha-454952 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 21:44:33.672583   21531 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-454952"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 21:44:33.672613   21531 kube-vip.go:111] generating kube-vip config ...
	I0404 21:44:33.672662   21531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0404 21:44:33.692154   21531 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0404 21:44:33.692294   21531 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0404 21:44:33.692360   21531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 21:44:33.706668   21531 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 21:44:33.706753   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0404 21:44:33.719047   21531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0404 21:44:33.738743   21531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 21:44:33.759868   21531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0404 21:44:33.780371   21531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0404 21:44:33.799501   21531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0404 21:44:33.803857   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:44:33.816893   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:44:33.944901   21531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:44:33.963225   21531 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952 for IP: 192.168.39.13
	I0404 21:44:33.963277   21531 certs.go:194] generating shared ca certs ...
	I0404 21:44:33.963295   21531 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:33.963454   21531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 21:44:33.963514   21531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 21:44:33.963527   21531 certs.go:256] generating profile certs ...
	I0404 21:44:33.963592   21531 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key
	I0404 21:44:33.963610   21531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt with IP's: []
	I0404 21:44:34.310349   21531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt ...
	I0404 21:44:34.310378   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt: {Name:mk842cef776f49e0c375e16a164e1b4ec24172f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:34.310568   21531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key ...
	I0404 21:44:34.310583   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key: {Name:mk2d8b7056432b32bc7806de3137cd82157befd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:34.310685   21531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.f401904e
	I0404 21:44:34.310702   21531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.f401904e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.13 192.168.39.254]
	I0404 21:44:34.519722   21531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.f401904e ...
	I0404 21:44:34.519746   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.f401904e: {Name:mkfae809a19680d483855c0b76ce3d3985f98122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:34.519896   21531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.f401904e ...
	I0404 21:44:34.519913   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.f401904e: {Name:mk6d2209e949a7d3510c9ad4e0a6814435e4ca2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:34.520005   21531 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.f401904e -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt
	I0404 21:44:34.520079   21531 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.f401904e -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key
	I0404 21:44:34.520163   21531 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key
	I0404 21:44:34.520183   21531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt with IP's: []
	I0404 21:44:34.629377   21531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt ...
	I0404 21:44:34.629412   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt: {Name:mkecf129b5a1480677134f643f060ec7d6af66af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:34.629609   21531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key ...
	I0404 21:44:34.629626   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key: {Name:mkde4a9612453c27dcf447317eaa0c633a0f5e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
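The crypto.go steps above issue profile certificates signed by the shared minikube CA; the apiserver certificate carries the node IP and the HA VIP in its SANs (the IP list logged at 21:44:34.310702). A rough openssl equivalent, purely illustrative and not minikube's actual code path (file names are placeholders):

    # Issue an apiserver cert signed by the cluster CA with the same IP SANs as above.
    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
      -out apiserver.csr -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.13,IP:192.168.39.254")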
	I0404 21:44:34.629734   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0404 21:44:34.629755   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0404 21:44:34.629767   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0404 21:44:34.629780   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0404 21:44:34.629791   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0404 21:44:34.629807   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0404 21:44:34.629821   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0404 21:44:34.629836   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0404 21:44:34.629893   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 21:44:34.629939   21531 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 21:44:34.629948   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 21:44:34.629977   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 21:44:34.630002   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 21:44:34.630026   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 21:44:34.630066   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:44:34.630101   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /usr/share/ca-certificates/125542.pem
	I0404 21:44:34.630118   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:44:34.630130   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem -> /usr/share/ca-certificates/12554.pem
	I0404 21:44:34.631167   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 21:44:34.663439   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 21:44:34.689465   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 21:44:34.714884   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 21:44:34.745497   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0404 21:44:34.791169   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 21:44:34.828644   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 21:44:34.853978   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 21:44:34.878857   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 21:44:34.903967   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 21:44:34.929361   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 21:44:34.955370   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 21:44:34.973332   21531 ssh_runner.go:195] Run: openssl version
	I0404 21:44:34.979428   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 21:44:34.991663   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:44:34.996625   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:44:34.996685   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:44:35.002750   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 21:44:35.015463   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 21:44:35.027354   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 21:44:35.031938   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 21:44:35.031984   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 21:44:35.037666   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 21:44:35.049548   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 21:44:35.061358   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 21:44:35.066041   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 21:44:35.066106   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 21:44:35.071886   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
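The openssl x509 -hash / ln -fs pairs above create OpenSSL's hash-named trust links (e.g. b5213941.0 for minikubeCA.pem, 51391683.0 for 12554.pem, 3ec20f2e.0 for 125542.pem). The same convention reproduced by hand, as a sketch with the path taken from the log:

    # Recreate one hash-named trust link; OpenSSL looks CA certs up by this hash.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"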
	I0404 21:44:35.084199   21531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 21:44:35.088572   21531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0404 21:44:35.088630   21531 kubeadm.go:391] StartCluster: {Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:44:35.088727   21531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 21:44:35.088799   21531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 21:44:35.128476   21531 cri.go:89] found id: ""
	I0404 21:44:35.128549   21531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0404 21:44:35.139591   21531 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 21:44:35.150620   21531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 21:44:35.161410   21531 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 21:44:35.161438   21531 kubeadm.go:156] found existing configuration files:
	
	I0404 21:44:35.161491   21531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 21:44:35.171678   21531 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 21:44:35.171750   21531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 21:44:35.182280   21531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 21:44:35.192492   21531 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 21:44:35.192563   21531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 21:44:35.203920   21531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 21:44:35.214551   21531 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 21:44:35.214613   21531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 21:44:35.225542   21531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 21:44:35.236489   21531 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 21:44:35.236546   21531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 21:44:35.247545   21531 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 21:44:35.504554   21531 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 21:44:46.667176   21531 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0404 21:44:46.667234   21531 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 21:44:46.667375   21531 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 21:44:46.667503   21531 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 21:44:46.667627   21531 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 21:44:46.667730   21531 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 21:44:46.669460   21531 out.go:204]   - Generating certificates and keys ...
	I0404 21:44:46.669539   21531 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 21:44:46.669638   21531 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 21:44:46.669740   21531 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0404 21:44:46.669825   21531 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0404 21:44:46.669917   21531 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0404 21:44:46.669994   21531 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0404 21:44:46.670082   21531 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0404 21:44:46.670236   21531 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-454952 localhost] and IPs [192.168.39.13 127.0.0.1 ::1]
	I0404 21:44:46.670325   21531 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0404 21:44:46.670485   21531 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-454952 localhost] and IPs [192.168.39.13 127.0.0.1 ::1]
	I0404 21:44:46.670568   21531 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0404 21:44:46.670647   21531 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0404 21:44:46.670711   21531 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0404 21:44:46.670783   21531 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 21:44:46.670856   21531 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 21:44:46.670938   21531 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0404 21:44:46.671013   21531 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 21:44:46.671182   21531 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 21:44:46.671272   21531 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 21:44:46.671392   21531 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 21:44:46.671493   21531 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 21:44:46.673545   21531 out.go:204]   - Booting up control plane ...
	I0404 21:44:46.673639   21531 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 21:44:46.673722   21531 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 21:44:46.673816   21531 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 21:44:46.673934   21531 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 21:44:46.674051   21531 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 21:44:46.674096   21531 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 21:44:46.674325   21531 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 21:44:46.674425   21531 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.621425 seconds
	I0404 21:44:46.674537   21531 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0404 21:44:46.674714   21531 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0404 21:44:46.674816   21531 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0404 21:44:46.675058   21531 kubeadm.go:309] [mark-control-plane] Marking the node ha-454952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0404 21:44:46.675118   21531 kubeadm.go:309] [bootstrap-token] Using token: ya8q6p.186cu33hp9v28qqx
	I0404 21:44:46.676247   21531 out.go:204]   - Configuring RBAC rules ...
	I0404 21:44:46.676368   21531 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0404 21:44:46.676473   21531 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0404 21:44:46.676646   21531 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0404 21:44:46.676803   21531 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0404 21:44:46.676909   21531 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0404 21:44:46.677028   21531 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0404 21:44:46.677139   21531 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0404 21:44:46.677190   21531 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0404 21:44:46.677232   21531 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0404 21:44:46.677239   21531 kubeadm.go:309] 
	I0404 21:44:46.677286   21531 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0404 21:44:46.677292   21531 kubeadm.go:309] 
	I0404 21:44:46.677367   21531 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0404 21:44:46.677377   21531 kubeadm.go:309] 
	I0404 21:44:46.677398   21531 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0404 21:44:46.677448   21531 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0404 21:44:46.677492   21531 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0404 21:44:46.677503   21531 kubeadm.go:309] 
	I0404 21:44:46.677566   21531 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0404 21:44:46.677577   21531 kubeadm.go:309] 
	I0404 21:44:46.677631   21531 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0404 21:44:46.677638   21531 kubeadm.go:309] 
	I0404 21:44:46.677707   21531 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0404 21:44:46.677819   21531 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0404 21:44:46.677917   21531 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0404 21:44:46.677936   21531 kubeadm.go:309] 
	I0404 21:44:46.678032   21531 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0404 21:44:46.678134   21531 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0404 21:44:46.678144   21531 kubeadm.go:309] 
	I0404 21:44:46.678235   21531 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ya8q6p.186cu33hp9v28qqx \
	I0404 21:44:46.678334   21531 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c \
	I0404 21:44:46.678355   21531 kubeadm.go:309] 	--control-plane 
	I0404 21:44:46.678364   21531 kubeadm.go:309] 
	I0404 21:44:46.678446   21531 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0404 21:44:46.678455   21531 kubeadm.go:309] 
	I0404 21:44:46.678554   21531 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ya8q6p.186cu33hp9v28qqx \
	I0404 21:44:46.678698   21531 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c 
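The --discovery-token-ca-cert-hash printed in the join commands above can be re-derived from the cluster CA (the [certs] line above sets certificateDir to /var/lib/minikube/certs); this is the standard kubeadm recipe, shown here only as a sketch:

    # Recompute the CA public-key hash that 'kubeadm join' uses for discovery.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'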
	I0404 21:44:46.678710   21531 cni.go:84] Creating CNI manager for ""
	I0404 21:44:46.678717   21531 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0404 21:44:46.680306   21531 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0404 21:44:46.681691   21531 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0404 21:44:46.701404   21531 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0404 21:44:46.701421   21531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0404 21:44:46.761476   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
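After the CNI manifest is applied above, a quick readiness check could look like the following; the DaemonSet name is an assumption based on minikube's kindnet manifest, not something logged in this run:

    # Wait for the kindnet DaemonSet (name assumed) applied above to roll out.
    kubectl -n kube-system rollout status daemonset kindnet --timeout=60s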
	I0404 21:44:47.161763   21531 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 21:44:47.161842   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:47.161899   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-454952 minikube.k8s.io/updated_at=2024_04_04T21_44_47_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=ha-454952 minikube.k8s.io/primary=true
	I0404 21:44:47.186182   21531 ops.go:34] apiserver oom_adj: -16
	I0404 21:44:47.319261   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:47.819595   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:48.320189   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:48.819327   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:49.319704   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:49.819463   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:50.320026   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:50.819391   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:51.320092   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:51.819953   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:52.319560   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:52.819983   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:53.320054   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:53.820167   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:54.320322   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:54.819637   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:55.320325   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:55.820153   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:56.319602   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:56.820208   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:57.319911   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:57.820284   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:58.319665   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:58.819575   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 21:44:58.947107   21531 kubeadm.go:1107] duration metric: took 11.785322233s to wait for elevateKubeSystemPrivileges
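The repeated 'kubectl get sa default' calls above are a readiness poll for the default ServiceAccount; an equivalent shell wait, shown as a sketch:

    # Poll until the default ServiceAccount exists, mirroring the loop above.
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done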
	W0404 21:44:58.947153   21531 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0404 21:44:58.947161   21531 kubeadm.go:393] duration metric: took 23.858536385s to StartCluster
	I0404 21:44:58.947176   21531 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:58.947256   21531 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:44:58.947885   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:44:58.948108   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0404 21:44:58.948112   21531 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:44:58.948221   21531 start.go:240] waiting for startup goroutines ...
	I0404 21:44:58.948208   21531 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 21:44:58.948307   21531 addons.go:69] Setting storage-provisioner=true in profile "ha-454952"
	I0404 21:44:58.948331   21531 addons.go:69] Setting default-storageclass=true in profile "ha-454952"
	I0404 21:44:58.948369   21531 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-454952"
	I0404 21:44:58.948332   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:44:58.948340   21531 addons.go:234] Setting addon storage-provisioner=true in "ha-454952"
	I0404 21:44:58.948515   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:44:58.948729   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:58.948783   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:58.948901   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:58.948930   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:58.964231   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0404 21:44:58.964253   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39427
	I0404 21:44:58.964663   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:58.964666   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:58.965156   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:58.965174   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:58.965313   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:58.965326   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:58.965551   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:58.965660   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:58.965852   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:44:58.966116   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:58.966163   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:58.967828   21531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:44:58.968082   21531 kapi.go:59] client config for ha-454952: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key", CAFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5c6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0404 21:44:58.968559   21531 cert_rotation.go:137] Starting client certificate rotation controller
	I0404 21:44:58.968655   21531 addons.go:234] Setting addon default-storageclass=true in "ha-454952"
	I0404 21:44:58.968703   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:44:58.968954   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:58.968996   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:58.982282   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0404 21:44:58.982824   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:58.983345   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:58.983373   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:58.983666   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37939
	I0404 21:44:58.983760   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:58.983932   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:44:58.984177   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:58.984682   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:58.984704   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:58.985051   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:58.985543   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:44:58.985564   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:44:58.985708   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:58.987846   21531 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 21:44:58.989702   21531 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 21:44:58.989725   21531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 21:44:58.989745   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:58.993077   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:58.993503   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:58.993536   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:58.993716   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:58.993907   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:58.994082   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:58.994247   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:59.001411   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37781
	I0404 21:44:59.001821   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:44:59.002254   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:44:59.002277   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:44:59.002574   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:44:59.002769   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:44:59.004536   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:44:59.004826   21531 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 21:44:59.004843   21531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 21:44:59.004860   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:44:59.007454   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:59.007846   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:44:59.007873   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:44:59.007997   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:44:59.008163   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:44:59.008303   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:44:59.008456   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:44:59.261455   21531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 21:44:59.273206   21531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 21:44:59.292694   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
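The sed pipeline above injects a hosts block resolving host.minikube.internal to 192.168.39.1 into the CoreDNS Corefile; the patched Corefile can be inspected afterwards (a sketch, not output from this run):

    # Verify that the hosts { 192.168.39.1 host.minikube.internal } stanza landed.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'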
	I0404 21:44:59.678008   21531 main.go:141] libmachine: Making call to close driver server
	I0404 21:44:59.678036   21531 main.go:141] libmachine: (ha-454952) Calling .Close
	I0404 21:44:59.678529   21531 main.go:141] libmachine: (ha-454952) DBG | Closing plugin on server side
	I0404 21:44:59.678531   21531 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:44:59.678550   21531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:44:59.678559   21531 main.go:141] libmachine: Making call to close driver server
	I0404 21:44:59.678573   21531 main.go:141] libmachine: (ha-454952) Calling .Close
	I0404 21:44:59.678833   21531 main.go:141] libmachine: (ha-454952) DBG | Closing plugin on server side
	I0404 21:44:59.678865   21531 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:44:59.678880   21531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:44:59.679028   21531 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0404 21:44:59.679039   21531 round_trippers.go:469] Request Headers:
	I0404 21:44:59.679050   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:44:59.679058   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:44:59.690493   21531 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0404 21:44:59.691028   21531 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0404 21:44:59.691046   21531 round_trippers.go:469] Request Headers:
	I0404 21:44:59.691055   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:44:59.691061   21531 round_trippers.go:473]     Content-Type: application/json
	I0404 21:44:59.691065   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:44:59.695796   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:44:59.695975   21531 main.go:141] libmachine: Making call to close driver server
	I0404 21:44:59.695991   21531 main.go:141] libmachine: (ha-454952) Calling .Close
	I0404 21:44:59.696288   21531 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:44:59.696304   21531 main.go:141] libmachine: Making call to close connection to plugin binary
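The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is the default-storageclass addon marking the 'standard' StorageClass as default; a hedged spot check:

    # Confirm the 'standard' StorageClass carries the default-class annotation.
    kubectl get storageclass standard \
      -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'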
	I0404 21:45:00.266307   21531 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0404 21:45:00.266540   21531 main.go:141] libmachine: Making call to close driver server
	I0404 21:45:00.266558   21531 main.go:141] libmachine: (ha-454952) Calling .Close
	I0404 21:45:00.266863   21531 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:45:00.266878   21531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:45:00.266887   21531 main.go:141] libmachine: Making call to close driver server
	I0404 21:45:00.266896   21531 main.go:141] libmachine: (ha-454952) Calling .Close
	I0404 21:45:00.267117   21531 main.go:141] libmachine: Successfully made call to close driver server
	I0404 21:45:00.267134   21531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 21:45:00.269215   21531 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0404 21:45:00.270610   21531 addons.go:505] duration metric: took 1.322405012s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0404 21:45:00.270652   21531 start.go:245] waiting for cluster config update ...
	I0404 21:45:00.270671   21531 start.go:254] writing updated cluster config ...
	I0404 21:45:00.272755   21531 out.go:177] 
	I0404 21:45:00.274535   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:45:00.274629   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:45:00.276821   21531 out.go:177] * Starting "ha-454952-m02" control-plane node in "ha-454952" cluster
	I0404 21:45:00.278381   21531 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:45:00.278414   21531 cache.go:56] Caching tarball of preloaded images
	I0404 21:45:00.278519   21531 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 21:45:00.278534   21531 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0404 21:45:00.278636   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:45:00.278871   21531 start.go:360] acquireMachinesLock for ha-454952-m02: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 21:45:00.278932   21531 start.go:364] duration metric: took 35.093µs to acquireMachinesLock for "ha-454952-m02"
	I0404 21:45:00.278961   21531 start.go:93] Provisioning new machine with config: &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:45:00.279049   21531 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0404 21:45:00.281049   21531 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0404 21:45:00.281152   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:45:00.281186   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:45:00.300272   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I0404 21:45:00.300765   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:45:00.301274   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:45:00.301300   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:45:00.301631   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:45:00.301871   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetMachineName
	I0404 21:45:00.302006   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:00.302148   21531 start.go:159] libmachine.API.Create for "ha-454952" (driver="kvm2")
	I0404 21:45:00.302167   21531 client.go:168] LocalClient.Create starting
	I0404 21:45:00.302193   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem
	I0404 21:45:00.302224   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:45:00.302239   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:45:00.302301   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem
	I0404 21:45:00.302328   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:45:00.302346   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:45:00.302372   21531 main.go:141] libmachine: Running pre-create checks...
	I0404 21:45:00.302388   21531 main.go:141] libmachine: (ha-454952-m02) Calling .PreCreateCheck
	I0404 21:45:00.302550   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetConfigRaw
	I0404 21:45:00.302938   21531 main.go:141] libmachine: Creating machine...
	I0404 21:45:00.302954   21531 main.go:141] libmachine: (ha-454952-m02) Calling .Create
	I0404 21:45:00.303078   21531 main.go:141] libmachine: (ha-454952-m02) Creating KVM machine...
	I0404 21:45:00.304163   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found existing default KVM network
	I0404 21:45:00.304281   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found existing private KVM network mk-ha-454952
	I0404 21:45:00.304509   21531 main.go:141] libmachine: (ha-454952-m02) Setting up store path in /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02 ...
	I0404 21:45:00.304535   21531 main.go:141] libmachine: (ha-454952-m02) Building disk image from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 21:45:00.304576   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:00.304477   21881 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:45:00.304686   21531 main.go:141] libmachine: (ha-454952-m02) Downloading /home/jenkins/minikube-integration/16143-5297/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0404 21:45:00.523864   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:00.523736   21881 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa...
	I0404 21:45:00.584744   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:00.584610   21881 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/ha-454952-m02.rawdisk...
	I0404 21:45:00.584777   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Writing magic tar header
	I0404 21:45:00.584788   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Writing SSH key tar header
	I0404 21:45:00.584799   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:00.584730   21881 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02 ...
	I0404 21:45:00.584880   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02
	I0404 21:45:00.584917   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02 (perms=drwx------)
	I0404 21:45:00.584947   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines
	I0404 21:45:00.584963   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines (perms=drwxr-xr-x)
	I0404 21:45:00.584978   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube (perms=drwxr-xr-x)
	I0404 21:45:00.584991   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:45:00.585005   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297 (perms=drwxrwxr-x)
	I0404 21:45:00.585018   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0404 21:45:00.585030   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297
	I0404 21:45:00.585042   21531 main.go:141] libmachine: (ha-454952-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0404 21:45:00.585057   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0404 21:45:00.585070   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home/jenkins
	I0404 21:45:00.585081   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Checking permissions on dir: /home
	I0404 21:45:00.585099   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Skipping /home - not owner
	I0404 21:45:00.585111   21531 main.go:141] libmachine: (ha-454952-m02) Creating domain...
	I0404 21:45:00.586000   21531 main.go:141] libmachine: (ha-454952-m02) define libvirt domain using xml: 
	I0404 21:45:00.586027   21531 main.go:141] libmachine: (ha-454952-m02) <domain type='kvm'>
	I0404 21:45:00.586038   21531 main.go:141] libmachine: (ha-454952-m02)   <name>ha-454952-m02</name>
	I0404 21:45:00.586048   21531 main.go:141] libmachine: (ha-454952-m02)   <memory unit='MiB'>2200</memory>
	I0404 21:45:00.586060   21531 main.go:141] libmachine: (ha-454952-m02)   <vcpu>2</vcpu>
	I0404 21:45:00.586069   21531 main.go:141] libmachine: (ha-454952-m02)   <features>
	I0404 21:45:00.586077   21531 main.go:141] libmachine: (ha-454952-m02)     <acpi/>
	I0404 21:45:00.586088   21531 main.go:141] libmachine: (ha-454952-m02)     <apic/>
	I0404 21:45:00.586097   21531 main.go:141] libmachine: (ha-454952-m02)     <pae/>
	I0404 21:45:00.586114   21531 main.go:141] libmachine: (ha-454952-m02)     
	I0404 21:45:00.586122   21531 main.go:141] libmachine: (ha-454952-m02)   </features>
	I0404 21:45:00.586128   21531 main.go:141] libmachine: (ha-454952-m02)   <cpu mode='host-passthrough'>
	I0404 21:45:00.586135   21531 main.go:141] libmachine: (ha-454952-m02)   
	I0404 21:45:00.586140   21531 main.go:141] libmachine: (ha-454952-m02)   </cpu>
	I0404 21:45:00.586151   21531 main.go:141] libmachine: (ha-454952-m02)   <os>
	I0404 21:45:00.586159   21531 main.go:141] libmachine: (ha-454952-m02)     <type>hvm</type>
	I0404 21:45:00.586172   21531 main.go:141] libmachine: (ha-454952-m02)     <boot dev='cdrom'/>
	I0404 21:45:00.586184   21531 main.go:141] libmachine: (ha-454952-m02)     <boot dev='hd'/>
	I0404 21:45:00.586199   21531 main.go:141] libmachine: (ha-454952-m02)     <bootmenu enable='no'/>
	I0404 21:45:00.586209   21531 main.go:141] libmachine: (ha-454952-m02)   </os>
	I0404 21:45:00.586216   21531 main.go:141] libmachine: (ha-454952-m02)   <devices>
	I0404 21:45:00.586227   21531 main.go:141] libmachine: (ha-454952-m02)     <disk type='file' device='cdrom'>
	I0404 21:45:00.586242   21531 main.go:141] libmachine: (ha-454952-m02)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/boot2docker.iso'/>
	I0404 21:45:00.586256   21531 main.go:141] libmachine: (ha-454952-m02)       <target dev='hdc' bus='scsi'/>
	I0404 21:45:00.586269   21531 main.go:141] libmachine: (ha-454952-m02)       <readonly/>
	I0404 21:45:00.586276   21531 main.go:141] libmachine: (ha-454952-m02)     </disk>
	I0404 21:45:00.586286   21531 main.go:141] libmachine: (ha-454952-m02)     <disk type='file' device='disk'>
	I0404 21:45:00.586295   21531 main.go:141] libmachine: (ha-454952-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0404 21:45:00.586309   21531 main.go:141] libmachine: (ha-454952-m02)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/ha-454952-m02.rawdisk'/>
	I0404 21:45:00.586324   21531 main.go:141] libmachine: (ha-454952-m02)       <target dev='hda' bus='virtio'/>
	I0404 21:45:00.586336   21531 main.go:141] libmachine: (ha-454952-m02)     </disk>
	I0404 21:45:00.586347   21531 main.go:141] libmachine: (ha-454952-m02)     <interface type='network'>
	I0404 21:45:00.586357   21531 main.go:141] libmachine: (ha-454952-m02)       <source network='mk-ha-454952'/>
	I0404 21:45:00.586366   21531 main.go:141] libmachine: (ha-454952-m02)       <model type='virtio'/>
	I0404 21:45:00.586372   21531 main.go:141] libmachine: (ha-454952-m02)     </interface>
	I0404 21:45:00.586383   21531 main.go:141] libmachine: (ha-454952-m02)     <interface type='network'>
	I0404 21:45:00.586409   21531 main.go:141] libmachine: (ha-454952-m02)       <source network='default'/>
	I0404 21:45:00.586434   21531 main.go:141] libmachine: (ha-454952-m02)       <model type='virtio'/>
	I0404 21:45:00.586440   21531 main.go:141] libmachine: (ha-454952-m02)     </interface>
	I0404 21:45:00.586445   21531 main.go:141] libmachine: (ha-454952-m02)     <serial type='pty'>
	I0404 21:45:00.586454   21531 main.go:141] libmachine: (ha-454952-m02)       <target port='0'/>
	I0404 21:45:00.586459   21531 main.go:141] libmachine: (ha-454952-m02)     </serial>
	I0404 21:45:00.586467   21531 main.go:141] libmachine: (ha-454952-m02)     <console type='pty'>
	I0404 21:45:00.586472   21531 main.go:141] libmachine: (ha-454952-m02)       <target type='serial' port='0'/>
	I0404 21:45:00.586482   21531 main.go:141] libmachine: (ha-454952-m02)     </console>
	I0404 21:45:00.586488   21531 main.go:141] libmachine: (ha-454952-m02)     <rng model='virtio'>
	I0404 21:45:00.586502   21531 main.go:141] libmachine: (ha-454952-m02)       <backend model='random'>/dev/random</backend>
	I0404 21:45:00.586506   21531 main.go:141] libmachine: (ha-454952-m02)     </rng>
	I0404 21:45:00.586512   21531 main.go:141] libmachine: (ha-454952-m02)     
	I0404 21:45:00.586518   21531 main.go:141] libmachine: (ha-454952-m02)     
	I0404 21:45:00.586523   21531 main.go:141] libmachine: (ha-454952-m02)   </devices>
	I0404 21:45:00.586530   21531 main.go:141] libmachine: (ha-454952-m02) </domain>
	I0404 21:45:00.586537   21531 main.go:141] libmachine: (ha-454952-m02) 
	I0404 21:45:00.593877   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:31:ab:5e in network default
	I0404 21:45:00.594406   21531 main.go:141] libmachine: (ha-454952-m02) Ensuring networks are active...
	I0404 21:45:00.594436   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:00.595200   21531 main.go:141] libmachine: (ha-454952-m02) Ensuring network default is active
	I0404 21:45:00.595569   21531 main.go:141] libmachine: (ha-454952-m02) Ensuring network mk-ha-454952 is active
	I0404 21:45:00.595893   21531 main.go:141] libmachine: (ha-454952-m02) Getting domain xml...
	I0404 21:45:00.596623   21531 main.go:141] libmachine: (ha-454952-m02) Creating domain...
	I0404 21:45:01.877660   21531 main.go:141] libmachine: (ha-454952-m02) Waiting to get IP...
	I0404 21:45:01.878698   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:01.879348   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:01.879380   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:01.879298   21881 retry.go:31] will retry after 236.231853ms: waiting for machine to come up
	I0404 21:45:02.116876   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:02.117407   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:02.117443   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:02.117367   21881 retry.go:31] will retry after 269.603826ms: waiting for machine to come up
	I0404 21:45:02.388837   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:02.389285   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:02.389332   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:02.389269   21881 retry.go:31] will retry after 383.378459ms: waiting for machine to come up
	I0404 21:45:02.773722   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:02.774204   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:02.774253   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:02.774161   21881 retry.go:31] will retry after 505.464099ms: waiting for machine to come up
	I0404 21:45:03.281604   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:03.282114   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:03.282161   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:03.282049   21881 retry.go:31] will retry after 616.997067ms: waiting for machine to come up
	I0404 21:45:03.900883   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:03.901343   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:03.901380   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:03.901291   21881 retry.go:31] will retry after 877.843112ms: waiting for machine to come up
	I0404 21:45:04.780474   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:04.780847   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:04.780886   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:04.780811   21881 retry.go:31] will retry after 961.213944ms: waiting for machine to come up
	I0404 21:45:05.743296   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:05.743781   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:05.743810   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:05.743730   21881 retry.go:31] will retry after 982.805613ms: waiting for machine to come up
	I0404 21:45:06.727769   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:06.728425   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:06.728463   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:06.728379   21881 retry.go:31] will retry after 1.304521252s: waiting for machine to come up
	I0404 21:45:08.034126   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:08.034548   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:08.034574   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:08.034510   21881 retry.go:31] will retry after 1.73753848s: waiting for machine to come up
	I0404 21:45:09.773381   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:09.773993   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:09.774031   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:09.773950   21881 retry.go:31] will retry after 2.161610241s: waiting for machine to come up
	I0404 21:45:11.937792   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:11.938364   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:11.938389   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:11.938322   21881 retry.go:31] will retry after 3.446680064s: waiting for machine to come up
	I0404 21:45:15.386967   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:15.387421   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:15.387443   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:15.387365   21881 retry.go:31] will retry after 3.966828686s: waiting for machine to come up
	I0404 21:45:19.358507   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:19.358967   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find current IP address of domain ha-454952-m02 in network mk-ha-454952
	I0404 21:45:19.358988   21531 main.go:141] libmachine: (ha-454952-m02) DBG | I0404 21:45:19.358931   21881 retry.go:31] will retry after 4.138996074s: waiting for machine to come up
	I0404 21:45:23.501644   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:23.502178   21531 main.go:141] libmachine: (ha-454952-m02) Found IP for machine: 192.168.39.60
	I0404 21:45:23.502207   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has current primary IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:23.502216   21531 main.go:141] libmachine: (ha-454952-m02) Reserving static IP address...
	I0404 21:45:23.502614   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find host DHCP lease matching {name: "ha-454952-m02", mac: "52:54:00:0e:de:98", ip: "192.168.39.60"} in network mk-ha-454952
	I0404 21:45:23.579059   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Getting to WaitForSSH function...
	I0404 21:45:23.579087   21531 main.go:141] libmachine: (ha-454952-m02) Reserved static IP address: 192.168.39.60
	I0404 21:45:23.579125   21531 main.go:141] libmachine: (ha-454952-m02) Waiting for SSH to be available...
	I0404 21:45:23.581914   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:23.582282   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952
	I0404 21:45:23.582310   21531 main.go:141] libmachine: (ha-454952-m02) DBG | unable to find defined IP address of network mk-ha-454952 interface with MAC address 52:54:00:0e:de:98
	I0404 21:45:23.582468   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Using SSH client type: external
	I0404 21:45:23.582499   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa (-rw-------)
	I0404 21:45:23.582529   21531 main.go:141] libmachine: (ha-454952-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 21:45:23.582543   21531 main.go:141] libmachine: (ha-454952-m02) DBG | About to run SSH command:
	I0404 21:45:23.582560   21531 main.go:141] libmachine: (ha-454952-m02) DBG | exit 0
	I0404 21:45:23.586935   21531 main.go:141] libmachine: (ha-454952-m02) DBG | SSH cmd err, output: exit status 255: 
	I0404 21:45:23.586958   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0404 21:45:23.586968   21531 main.go:141] libmachine: (ha-454952-m02) DBG | command : exit 0
	I0404 21:45:23.586975   21531 main.go:141] libmachine: (ha-454952-m02) DBG | err     : exit status 255
	I0404 21:45:23.587009   21531 main.go:141] libmachine: (ha-454952-m02) DBG | output  : 
	I0404 21:45:26.587489   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Getting to WaitForSSH function...
	I0404 21:45:26.590334   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.590710   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:26.590734   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.590919   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Using SSH client type: external
	I0404 21:45:26.590947   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa (-rw-------)
	I0404 21:45:26.590990   21531 main.go:141] libmachine: (ha-454952-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 21:45:26.591006   21531 main.go:141] libmachine: (ha-454952-m02) DBG | About to run SSH command:
	I0404 21:45:26.591044   21531 main.go:141] libmachine: (ha-454952-m02) DBG | exit 0
	I0404 21:45:26.720957   21531 main.go:141] libmachine: (ha-454952-m02) DBG | SSH cmd err, output: <nil>: 
	I0404 21:45:26.721239   21531 main.go:141] libmachine: (ha-454952-m02) KVM machine creation complete!
	I0404 21:45:26.721562   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetConfigRaw
	I0404 21:45:26.722111   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:26.722318   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:26.722460   21531 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0404 21:45:26.722476   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 21:45:26.723684   21531 main.go:141] libmachine: Detecting operating system of created instance...
	I0404 21:45:26.723697   21531 main.go:141] libmachine: Waiting for SSH to be available...
	I0404 21:45:26.723703   21531 main.go:141] libmachine: Getting to WaitForSSH function...
	I0404 21:45:26.723708   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:26.725754   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.726161   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:26.726182   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.726335   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:26.726553   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.726766   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.726951   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:26.727140   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:26.727343   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:26.727355   21531 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0404 21:45:26.836708   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:45:26.836734   21531 main.go:141] libmachine: Detecting the provisioner...
	I0404 21:45:26.836744   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:26.839938   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.840332   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:26.840361   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.840569   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:26.840783   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.840943   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.841059   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:26.841253   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:26.841476   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:26.841495   21531 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0404 21:45:26.953111   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0404 21:45:26.953181   21531 main.go:141] libmachine: found compatible host: buildroot
	I0404 21:45:26.953192   21531 main.go:141] libmachine: Provisioning with buildroot...
	I0404 21:45:26.953204   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetMachineName
	I0404 21:45:26.953474   21531 buildroot.go:166] provisioning hostname "ha-454952-m02"
	I0404 21:45:26.953502   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetMachineName
	I0404 21:45:26.953659   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:26.956549   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.956908   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:26.956937   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:26.957079   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:26.957236   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.957390   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:26.957532   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:26.957687   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:26.957867   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:26.957892   21531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-454952-m02 && echo "ha-454952-m02" | sudo tee /etc/hostname
	I0404 21:45:27.083989   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-454952-m02
	
	I0404 21:45:27.084014   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:27.086982   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.087393   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.087424   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.087609   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:27.087793   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.087937   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.088043   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:27.088286   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:27.088452   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:27.088469   21531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-454952-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-454952-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-454952-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 21:45:27.206028   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:45:27.206055   21531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 21:45:27.206074   21531 buildroot.go:174] setting up certificates
	I0404 21:45:27.206086   21531 provision.go:84] configureAuth start
	I0404 21:45:27.206096   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetMachineName
	I0404 21:45:27.206369   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:45:27.208940   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.209285   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.209319   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.209470   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:27.211924   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.212290   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.212318   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.212425   21531 provision.go:143] copyHostCerts
	I0404 21:45:27.212472   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:45:27.212511   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 21:45:27.212523   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:45:27.212612   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 21:45:27.212702   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:45:27.212728   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 21:45:27.212736   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:45:27.212774   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 21:45:27.212834   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:45:27.212858   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 21:45:27.212874   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:45:27.212910   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 21:45:27.212993   21531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.ha-454952-m02 san=[127.0.0.1 192.168.39.60 ha-454952-m02 localhost minikube]
	I0404 21:45:27.444142   21531 provision.go:177] copyRemoteCerts
	I0404 21:45:27.444192   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 21:45:27.444216   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:27.447017   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.447404   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.447433   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.447591   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:27.447809   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.448004   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:27.448148   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	I0404 21:45:27.537079   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0404 21:45:27.537138   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 21:45:27.564140   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0404 21:45:27.564219   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0404 21:45:27.591891   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0404 21:45:27.591959   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 21:45:27.618967   21531 provision.go:87] duration metric: took 412.871453ms to configureAuth
	I0404 21:45:27.618995   21531 buildroot.go:189] setting minikube options for container-runtime
	I0404 21:45:27.619165   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:45:27.619229   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:27.622532   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.622976   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.623008   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.623143   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:27.623365   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.623535   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.623667   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:27.623824   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:27.623983   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:27.623997   21531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 21:45:27.928735   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 21:45:27.928789   21531 main.go:141] libmachine: Checking connection to Docker...
	I0404 21:45:27.928798   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetURL
	I0404 21:45:27.930111   21531 main.go:141] libmachine: (ha-454952-m02) DBG | Using libvirt version 6000000
	I0404 21:45:27.932772   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.933200   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.933231   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.933409   21531 main.go:141] libmachine: Docker is up and running!
	I0404 21:45:27.933424   21531 main.go:141] libmachine: Reticulating splines...
	I0404 21:45:27.933439   21531 client.go:171] duration metric: took 27.631265815s to LocalClient.Create
	I0404 21:45:27.933461   21531 start.go:167] duration metric: took 27.631314558s to libmachine.API.Create "ha-454952"
	I0404 21:45:27.933470   21531 start.go:293] postStartSetup for "ha-454952-m02" (driver="kvm2")
	I0404 21:45:27.933480   21531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 21:45:27.933499   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:27.933704   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 21:45:27.933724   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:27.936189   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.936512   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:27.936541   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:27.936669   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:27.936876   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:27.937042   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:27.937234   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	I0404 21:45:28.023309   21531 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 21:45:28.027805   21531 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 21:45:28.027836   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 21:45:28.027903   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 21:45:28.027969   21531 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 21:45:28.027980   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /etc/ssl/certs/125542.pem
	I0404 21:45:28.028088   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 21:45:28.038297   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:45:28.063041   21531 start.go:296] duration metric: took 129.558479ms for postStartSetup
	I0404 21:45:28.063098   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetConfigRaw
	I0404 21:45:28.063738   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:45:28.066667   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.067100   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:28.067124   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.067352   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:45:28.067582   21531 start.go:128] duration metric: took 27.788519902s to createHost
	I0404 21:45:28.067612   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:28.071313   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.071654   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:28.071688   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.071814   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:28.072005   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:28.072209   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:28.072354   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:28.072502   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:45:28.072691   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0404 21:45:28.072701   21531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 21:45:28.185571   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712267128.163702316
	
	I0404 21:45:28.185598   21531 fix.go:216] guest clock: 1712267128.163702316
	I0404 21:45:28.185608   21531 fix.go:229] Guest: 2024-04-04 21:45:28.163702316 +0000 UTC Remote: 2024-04-04 21:45:28.067598122 +0000 UTC m=+85.464231324 (delta=96.104194ms)
	I0404 21:45:28.185633   21531 fix.go:200] guest clock delta is within tolerance: 96.104194ms
	I0404 21:45:28.185639   21531 start.go:83] releasing machines lock for "ha-454952-m02", held for 27.906690079s
	I0404 21:45:28.185663   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:28.185952   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:45:28.188559   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.188890   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:28.188919   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.191416   21531 out.go:177] * Found network options:
	I0404 21:45:28.192897   21531 out.go:177]   - NO_PROXY=192.168.39.13
	W0404 21:45:28.194105   21531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0404 21:45:28.194140   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:28.194757   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:28.194929   21531 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 21:45:28.195009   21531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 21:45:28.195049   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	W0404 21:45:28.195155   21531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0404 21:45:28.195239   21531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 21:45:28.195259   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 21:45:28.197662   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.198021   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.198073   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:28.198091   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.198296   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:28.198423   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:28.198453   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:28.198452   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:28.198717   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 21:45:28.198726   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:28.198949   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	I0404 21:45:28.198967   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 21:45:28.199166   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 21:45:28.199327   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	I0404 21:45:28.433038   21531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 21:45:28.439811   21531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 21:45:28.439886   21531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 21:45:28.457393   21531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 21:45:28.457423   21531 start.go:494] detecting cgroup driver to use...
	I0404 21:45:28.457490   21531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 21:45:28.474546   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 21:45:28.489787   21531 docker.go:217] disabling cri-docker service (if available) ...
	I0404 21:45:28.489847   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 21:45:28.503963   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 21:45:28.518290   21531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 21:45:28.637383   21531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 21:45:28.788758   21531 docker.go:233] disabling docker service ...
	I0404 21:45:28.788826   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 21:45:28.805511   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 21:45:28.819427   21531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 21:45:28.959689   21531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 21:45:29.103883   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 21:45:29.118755   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 21:45:29.139139   21531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 21:45:29.139213   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.150656   21531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 21:45:29.150730   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.162665   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.175117   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.187243   21531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 21:45:29.199827   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.212464   21531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.233434   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:45:29.245487   21531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 21:45:29.256575   21531 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 21:45:29.256640   21531 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 21:45:29.272739   21531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 21:45:29.284733   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:45:29.413393   21531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 21:45:29.560029   21531 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 21:45:29.560102   21531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 21:45:29.565394   21531 start.go:562] Will wait 60s for crictl version
	I0404 21:45:29.565444   21531 ssh_runner.go:195] Run: which crictl
	I0404 21:45:29.570093   21531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 21:45:29.609360   21531 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 21:45:29.609434   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:45:29.641317   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:45:29.672765   21531 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 21:45:29.674396   21531 out.go:177]   - env NO_PROXY=192.168.39.13
	I0404 21:45:29.675983   21531 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 21:45:29.678787   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:29.679137   21531 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:45:15 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 21:45:29.679157   21531 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 21:45:29.679434   21531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 21:45:29.683924   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:45:29.698250   21531 mustload.go:65] Loading cluster: ha-454952
	I0404 21:45:29.698463   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:45:29.698722   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:45:29.698754   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:45:29.714030   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41751
	I0404 21:45:29.714397   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:45:29.714808   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:45:29.714824   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:45:29.715195   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:45:29.715375   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:45:29.716904   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:45:29.717311   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:45:29.717342   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:45:29.731650   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0404 21:45:29.732057   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:45:29.732518   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:45:29.732541   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:45:29.732922   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:45:29.733111   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:45:29.733311   21531 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952 for IP: 192.168.39.60
	I0404 21:45:29.733323   21531 certs.go:194] generating shared ca certs ...
	I0404 21:45:29.733339   21531 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:45:29.733478   21531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 21:45:29.733530   21531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 21:45:29.733543   21531 certs.go:256] generating profile certs ...
	I0404 21:45:29.733715   21531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key
	I0404 21:45:29.733751   21531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.6b270c4f
	I0404 21:45:29.733772   21531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.6b270c4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.13 192.168.39.60 192.168.39.254]
	I0404 21:45:29.807683   21531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.6b270c4f ...
	I0404 21:45:29.807716   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.6b270c4f: {Name:mkd103717d1c351620973f640a9417354542e3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:45:29.807906   21531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.6b270c4f ...
	I0404 21:45:29.807924   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.6b270c4f: {Name:mk07c5ec9d008651c2ca286887884086db0afe24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:45:29.808022   21531 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.6b270c4f -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt
	I0404 21:45:29.808212   21531 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.6b270c4f -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key
	I0404 21:45:29.808396   21531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key
	I0404 21:45:29.808414   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0404 21:45:29.808431   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0404 21:45:29.808450   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0404 21:45:29.808468   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0404 21:45:29.808493   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0404 21:45:29.808510   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0404 21:45:29.808524   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0404 21:45:29.808542   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0404 21:45:29.808624   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 21:45:29.808665   21531 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 21:45:29.808678   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 21:45:29.808708   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 21:45:29.808739   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 21:45:29.808770   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 21:45:29.808837   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:45:29.808877   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /usr/share/ca-certificates/125542.pem
	I0404 21:45:29.808896   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:45:29.808913   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem -> /usr/share/ca-certificates/12554.pem
	I0404 21:45:29.808950   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:45:29.812039   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:45:29.812452   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:45:29.812472   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:45:29.812658   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:45:29.812831   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:45:29.812989   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:45:29.813160   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:45:29.892540   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0404 21:45:29.898217   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0404 21:45:29.911157   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0404 21:45:29.915892   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0404 21:45:29.928090   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0404 21:45:29.932403   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0404 21:45:29.943834   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0404 21:45:29.948036   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0404 21:45:29.960525   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0404 21:45:29.965239   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0404 21:45:29.981031   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0404 21:45:29.985580   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0404 21:45:29.997512   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 21:45:30.024317   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 21:45:30.051187   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 21:45:30.077854   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 21:45:30.105971   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0404 21:45:30.131831   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 21:45:30.157884   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 21:45:30.183114   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 21:45:30.211074   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 21:45:30.237872   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 21:45:30.265115   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 21:45:30.292810   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0404 21:45:30.314525   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0404 21:45:30.332072   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0404 21:45:30.349494   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0404 21:45:30.368701   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0404 21:45:30.387574   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0404 21:45:30.405763   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0404 21:45:30.423168   21531 ssh_runner.go:195] Run: openssl version
	I0404 21:45:30.429038   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 21:45:30.441069   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 21:45:30.446531   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 21:45:30.446592   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 21:45:30.452986   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 21:45:30.465883   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 21:45:30.477901   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:45:30.482627   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:45:30.482682   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:45:30.489021   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 21:45:30.502287   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 21:45:30.515896   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 21:45:30.520543   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 21:45:30.520605   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 21:45:30.526429   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 21:45:30.538115   21531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 21:45:30.542417   21531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0404 21:45:30.542475   21531 kubeadm.go:928] updating node {m02 192.168.39.60 8443 v1.29.3 crio true true} ...
	I0404 21:45:30.542554   21531 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-454952-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 21:45:30.542578   21531 kube-vip.go:111] generating kube-vip config ...
	I0404 21:45:30.542611   21531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0404 21:45:30.561396   21531 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0404 21:45:30.561537   21531 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0404 21:45:30.561595   21531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 21:45:30.573506   21531 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0404 21:45:30.573557   21531 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0404 21:45:30.584050   21531 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0404 21:45:30.584083   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0404 21:45:30.584153   21531 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0404 21:45:30.584167   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0404 21:45:30.584191   21531 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0404 21:45:30.588823   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0404 21:45:30.588855   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0404 21:45:56.438742   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:45:56.454469   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0404 21:45:56.454568   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0404 21:45:56.458893   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0404 21:45:56.458926   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0404 21:45:58.191023   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0404 21:45:58.191110   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0404 21:45:58.196342   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0404 21:45:58.196372   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0404 21:45:58.450605   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0404 21:45:58.460793   21531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0404 21:45:58.478720   21531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 21:45:58.497698   21531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0404 21:45:58.515695   21531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0404 21:45:58.519999   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:45:58.533166   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:45:58.664897   21531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:45:58.682498   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:45:58.682825   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:45:58.682860   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:45:58.698067   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43069
	I0404 21:45:58.698482   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:45:58.699051   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:45:58.699078   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:45:58.699411   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:45:58.699647   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:45:58.699821   21531 start.go:316] joinCluster: &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:45:58.699914   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0404 21:45:58.699929   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:45:58.702998   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:45:58.703459   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:45:58.703488   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:45:58.703633   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:45:58.703805   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:45:58.703972   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:45:58.704105   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:45:58.887846   21531 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:45:58.887889   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2defvu.xmfc923okok4qteb --discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-454952-m02 --control-plane --apiserver-advertise-address=192.168.39.60 --apiserver-bind-port=8443"
	I0404 21:46:23.956199   21531 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2defvu.xmfc923okok4qteb --discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-454952-m02 --control-plane --apiserver-advertise-address=192.168.39.60 --apiserver-bind-port=8443": (25.068283341s)
	I0404 21:46:23.956235   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0404 21:46:24.469532   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-454952-m02 minikube.k8s.io/updated_at=2024_04_04T21_46_24_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=ha-454952 minikube.k8s.io/primary=false
	I0404 21:46:24.622440   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-454952-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0404 21:46:24.729960   21531 start.go:318] duration metric: took 26.030136183s to joinCluster
	I0404 21:46:24.730023   21531 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:46:24.731971   21531 out.go:177] * Verifying Kubernetes components...
	I0404 21:46:24.730302   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:46:24.733336   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:46:24.925708   21531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:46:24.989603   21531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:46:24.989909   21531 kapi.go:59] client config for ha-454952: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key", CAFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5c6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0404 21:46:24.989985   21531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.13:8443
	I0404 21:46:24.990268   21531 node_ready.go:35] waiting up to 6m0s for node "ha-454952-m02" to be "Ready" ...
	I0404 21:46:24.990356   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:24.990367   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:24.990377   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:24.990386   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:25.002065   21531 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0404 21:46:25.490882   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:25.490901   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:25.490909   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:25.490915   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:25.494631   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:25.990628   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:25.990654   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:25.990666   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:25.990679   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:25.993916   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:26.491434   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:26.491458   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:26.491469   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:26.491475   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:26.495319   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:26.990465   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:26.990487   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:26.990495   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:26.990499   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:26.994146   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:26.995086   21531 node_ready.go:53] node "ha-454952-m02" has status "Ready":"False"
	I0404 21:46:27.491398   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:27.491421   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:27.491458   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:27.491464   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:27.494894   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:27.991073   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:27.991098   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:27.991107   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:27.991116   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:27.995056   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:28.491179   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:28.491207   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:28.491218   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:28.491226   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:28.495320   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:28.991235   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:28.991257   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:28.991266   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:28.991273   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:28.995835   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:28.996482   21531 node_ready.go:53] node "ha-454952-m02" has status "Ready":"False"
	I0404 21:46:29.490887   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:29.490908   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:29.490914   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:29.490917   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:29.494469   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:29.991300   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:29.991335   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:29.991342   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:29.991346   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:29.994389   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:30.491083   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:30.491102   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.491110   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.491113   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.494483   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:30.495199   21531 node_ready.go:49] node "ha-454952-m02" has status "Ready":"True"
	I0404 21:46:30.495227   21531 node_ready.go:38] duration metric: took 5.504929948s for node "ha-454952-m02" to be "Ready" ...
	I0404 21:46:30.495236   21531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 21:46:30.495373   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:46:30.495385   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.495392   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.495396   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.500629   21531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0404 21:46:30.506720   21531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9qsz7" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.506809   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-9qsz7
	I0404 21:46:30.506822   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.506831   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.506838   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.510005   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:30.510750   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:30.510775   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.510781   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.510785   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.513908   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:30.514394   21531 pod_ready.go:92] pod "coredns-76f75df574-9qsz7" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:30.514413   21531 pod_ready.go:81] duration metric: took 7.670219ms for pod "coredns-76f75df574-9qsz7" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.514423   21531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hsdfw" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.514473   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-hsdfw
	I0404 21:46:30.514480   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.514487   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.514492   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.517301   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:30.517882   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:30.517898   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.517905   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.517910   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.520578   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:30.521155   21531 pod_ready.go:92] pod "coredns-76f75df574-hsdfw" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:30.521172   21531 pod_ready.go:81] duration metric: took 6.743286ms for pod "coredns-76f75df574-hsdfw" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.521181   21531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.521239   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952
	I0404 21:46:30.521249   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.521256   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.521260   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.524258   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:30.525102   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:30.525124   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.525131   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.525137   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.528292   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:30.529146   21531 pod_ready.go:92] pod "etcd-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:30.529166   21531 pod_ready.go:81] duration metric: took 7.977704ms for pod "etcd-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.529175   21531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:30.529263   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:30.529276   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.529283   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.529287   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.532091   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:30.532889   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:30.532905   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:30.532915   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:30.532918   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:30.535402   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:31.029639   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:31.029662   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:31.029670   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:31.029673   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:31.033490   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:31.034087   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:31.034103   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:31.034111   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:31.034115   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:31.037298   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:31.529424   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:31.529444   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:31.529450   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:31.529454   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:31.533195   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:31.534076   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:31.534098   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:31.534108   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:31.534117   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:31.537925   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:32.029843   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:32.029869   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:32.029878   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:32.029881   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:32.033777   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:32.034534   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:32.034547   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:32.034553   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:32.034559   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:32.037396   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:32.530229   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:32.530267   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:32.530275   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:32.530279   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:32.534214   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:32.535354   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:32.535372   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:32.535379   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:32.535382   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:32.538606   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:32.539304   21531 pod_ready.go:102] pod "etcd-ha-454952-m02" in "kube-system" namespace has status "Ready":"False"
	I0404 21:46:33.029394   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:33.029425   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:33.029433   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:33.029437   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:33.033398   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:33.034003   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:33.034019   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:33.034028   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:33.034034   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:33.037004   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:33.530224   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:33.530253   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:33.530262   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:33.530272   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:33.533909   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:33.534823   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:33.534840   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:33.534847   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:33.534851   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:33.537652   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:34.030372   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:34.030393   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:34.030401   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:34.030405   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:34.039930   21531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0404 21:46:34.041397   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:34.041417   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:34.041428   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:34.041431   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:34.045249   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:34.529378   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:34.529417   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:34.529424   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:34.529428   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:34.533374   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:34.534202   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:34.534218   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:34.534225   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:34.534229   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:34.537317   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:35.029373   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:35.029397   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.029405   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.029410   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.033450   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:35.034195   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:35.034208   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.034215   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.034220   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.037228   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.037748   21531 pod_ready.go:102] pod "etcd-ha-454952-m02" in "kube-system" namespace has status "Ready":"False"
	I0404 21:46:35.529695   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:46:35.529714   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.529721   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.529725   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.533941   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:35.534944   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:35.534963   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.534974   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.534978   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.537794   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.538543   21531 pod_ready.go:92] pod "etcd-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:35.538561   21531 pod_ready.go:81] duration metric: took 5.009380502s for pod "etcd-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.538575   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.538628   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-454952
	I0404 21:46:35.538636   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.538642   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.538646   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.541590   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.542255   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:35.542274   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.542285   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.542292   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.544857   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.545522   21531 pod_ready.go:92] pod "kube-apiserver-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:35.545545   21531 pod_ready.go:81] duration metric: took 6.963641ms for pod "kube-apiserver-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.545558   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.545628   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-454952-m02
	I0404 21:46:35.545637   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.545645   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.545652   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.548205   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.548881   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:35.548895   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.548901   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.548904   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.551179   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.551729   21531 pod_ready.go:92] pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:35.551746   21531 pod_ready.go:81] duration metric: took 6.180806ms for pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.551755   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.551803   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952
	I0404 21:46:35.551811   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.551818   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.551820   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.554254   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:35.691230   21531 request.go:629] Waited for 136.263257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:35.691286   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:35.691292   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.691311   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.691321   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.696097   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:35.697549   21531 pod_ready.go:92] pod "kube-controller-manager-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:35.697577   21531 pod_ready.go:81] duration metric: took 145.814687ms for pod "kube-controller-manager-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.697593   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:35.892070   21531 request.go:629] Waited for 194.408263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m02
	I0404 21:46:35.892178   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m02
	I0404 21:46:35.892189   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:35.892197   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:35.892203   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:35.895814   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:36.091210   21531 request.go:629] Waited for 194.316591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:36.091276   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:36.091282   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:36.091289   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:36.091292   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:36.094834   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:36.095670   21531 pod_ready.go:92] pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:36.095693   21531 pod_ready.go:81] duration metric: took 398.091423ms for pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:36.095705   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6nkxm" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:36.291768   21531 request.go:629] Waited for 195.980439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nkxm
	I0404 21:46:36.291834   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nkxm
	I0404 21:46:36.291856   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:36.291864   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:36.291867   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:36.295259   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:36.491226   21531 request.go:629] Waited for 195.287616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:36.491325   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:36.491339   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:36.491346   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:36.491350   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:36.494357   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:46:36.494867   21531 pod_ready.go:92] pod "kube-proxy-6nkxm" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:36.494883   21531 pod_ready.go:81] duration metric: took 399.17144ms for pod "kube-proxy-6nkxm" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:36.494893   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gjvm9" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:36.692033   21531 request.go:629] Waited for 197.066541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjvm9
	I0404 21:46:36.692108   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjvm9
	I0404 21:46:36.692113   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:36.692133   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:36.692138   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:36.695596   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:36.891960   21531 request.go:629] Waited for 195.407458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:36.892024   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:36.892032   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:36.892042   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:36.892054   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:36.898107   21531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0404 21:46:36.898810   21531 pod_ready.go:92] pod "kube-proxy-gjvm9" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:36.898829   21531 pod_ready.go:81] duration metric: took 403.928463ms for pod "kube-proxy-gjvm9" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:36.898841   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:37.091944   21531 request.go:629] Waited for 193.041942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952
	I0404 21:46:37.092009   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952
	I0404 21:46:37.092015   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.092022   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.092027   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.096064   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:46:37.291096   21531 request.go:629] Waited for 194.285325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:37.291170   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:46:37.291175   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.291183   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.291187   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.294221   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:37.294848   21531 pod_ready.go:92] pod "kube-scheduler-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:37.294886   21531 pod_ready.go:81] duration metric: took 396.037372ms for pod "kube-scheduler-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:37.294899   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:37.491988   21531 request.go:629] Waited for 197.014907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m02
	I0404 21:46:37.492058   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m02
	I0404 21:46:37.492068   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.492076   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.492085   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.495596   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:37.691545   21531 request.go:629] Waited for 195.216161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:37.691627   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:46:37.691634   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.691645   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.691652   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.695020   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:37.695705   21531 pod_ready.go:92] pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:46:37.695724   21531 pod_ready.go:81] duration metric: took 400.817481ms for pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:46:37.695734   21531 pod_ready.go:38] duration metric: took 7.200463659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
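
The polling loop above repeats the same pair of GETs (the pod, then its node) roughly every 500ms until the pod's Ready condition flips to True. A minimal client-go sketch of that pattern — illustrative only, not minikube's own pod_ready implementation; the kubeconfig path, namespace, and pod name are example values:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the API server until the pod's Ready condition is True.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // roughly the cadence seen in the log
		}
	}
}

func main() {
	// Build a clientset from the default kubeconfig (example only).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPodReady(ctx, cs, "kube-system", "etcd-ha-454952-m02"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
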
	I0404 21:46:37.695748   21531 api_server.go:52] waiting for apiserver process to appear ...
	I0404 21:46:37.695799   21531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:46:37.711868   21531 api_server.go:72] duration metric: took 12.981814066s to wait for apiserver process to appear ...
	I0404 21:46:37.711900   21531 api_server.go:88] waiting for apiserver healthz status ...
	I0404 21:46:37.711924   21531 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0404 21:46:37.717849   21531 api_server.go:279] https://192.168.39.13:8443/healthz returned 200:
	ok
	I0404 21:46:37.717911   21531 round_trippers.go:463] GET https://192.168.39.13:8443/version
	I0404 21:46:37.717917   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.717924   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.717928   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.718819   21531 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0404 21:46:37.718958   21531 api_server.go:141] control plane version: v1.29.3
	I0404 21:46:37.718980   21531 api_server.go:131] duration metric: took 7.072339ms to wait for apiserver health ...
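
The healthz probe above simply expects the literal body "ok" from /healthz. With a clientset like the one in the previous sketch, the same request can be issued through the discovery REST client (again a sketch, not minikube's api_server.go code):

// checkHealthz asks the API server for /healthz and expects the body "ok".
// Reuses the clientset and imports from the sketch above.
func checkHealthz(ctx context.Context, cs *kubernetes.Clientset) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected healthz response: %q", string(body))
	}
	return nil
}
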
	I0404 21:46:37.718991   21531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 21:46:37.891584   21531 request.go:629] Waited for 172.519797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:46:37.891683   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:46:37.891694   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:37.891705   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:37.891714   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:37.900096   21531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0404 21:46:37.906669   21531 system_pods.go:59] 17 kube-system pods found
	I0404 21:46:37.906715   21531 system_pods.go:61] "coredns-76f75df574-9qsz7" [5af3d10e-47b7-439c-80e3-8ee328d87f16] Running
	I0404 21:46:37.906724   21531 system_pods.go:61] "coredns-76f75df574-hsdfw" [e0384e31-ec4b-4b09-b387-5bce7a36b688] Running
	I0404 21:46:37.906729   21531 system_pods.go:61] "etcd-ha-454952" [d3885d9d-9e4a-4ebb-9bbb-85d1fc88519a] Running
	I0404 21:46:37.906733   21531 system_pods.go:61] "etcd-ha-454952-m02" [a84a3d55-0e63-4944-8368-141d61a3dfdd] Running
	I0404 21:46:37.906737   21531 system_pods.go:61] "kindnet-7c9dv" [044a24e9-2851-47a3-be58-29ecfae2f0fb] Running
	I0404 21:46:37.906741   21531 system_pods.go:61] "kindnet-v8wv6" [44250298-dce4-4e12-88c2-e347b4a63711] Running
	I0404 21:46:37.906746   21531 system_pods.go:61] "kube-apiserver-ha-454952" [93735313-30dc-4c96-b847-a6119cf400c8] Running
	I0404 21:46:37.906751   21531 system_pods.go:61] "kube-apiserver-ha-454952-m02" [b1abcf64-e80a-4e10-b069-c7d6827bda4a] Running
	I0404 21:46:37.906757   21531 system_pods.go:61] "kube-controller-manager-ha-454952" [17a4bba8-4424-4a4c-b5d4-88693cb013b6] Running
	I0404 21:46:37.906762   21531 system_pods.go:61] "kube-controller-manager-ha-454952-m02" [29adfa13-fd82-4b6b-a00a-3ecf9fb575de] Running
	I0404 21:46:37.906770   21531 system_pods.go:61] "kube-proxy-6nkxm" [67f1a256-ee89-4563-ab19-4a75f01d2c3a] Running
	I0404 21:46:37.906776   21531 system_pods.go:61] "kube-proxy-gjvm9" [60759cb6-a394-4e3e-a19e-f9b7c92a19db] Running
	I0404 21:46:37.906783   21531 system_pods.go:61] "kube-scheduler-ha-454952" [9ad2aa31-283d-47d1-a7ff-3c13974f4ba8] Running
	I0404 21:46:37.906789   21531 system_pods.go:61] "kube-scheduler-ha-454952-m02" [89c4a892-3be4-41a7-aa44-b19230ff2515] Running
	I0404 21:46:37.906794   21531 system_pods.go:61] "kube-vip-ha-454952" [87898a7a-a2df-46cf-8b58-f6e5ca4e5f7b] Running
	I0404 21:46:37.906799   21531 system_pods.go:61] "kube-vip-ha-454952-m02" [2632721a-d7cc-40f1-988d-ab0aa8cfe79a] Running
	I0404 21:46:37.906808   21531 system_pods.go:61] "storage-provisioner" [c8531ddb-fa9d-4efe-91cc-072e75a5897d] Running
	I0404 21:46:37.906817   21531 system_pods.go:74] duration metric: took 187.815542ms to wait for pod list to return data ...
	I0404 21:46:37.906831   21531 default_sa.go:34] waiting for default service account to be created ...
	I0404 21:46:38.091194   21531 request.go:629] Waited for 184.268682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/default/serviceaccounts
	I0404 21:46:38.091273   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/default/serviceaccounts
	I0404 21:46:38.091287   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:38.091298   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:38.091304   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:38.095221   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:38.095441   21531 default_sa.go:45] found service account: "default"
	I0404 21:46:38.095458   21531 default_sa.go:55] duration metric: took 188.620189ms for default service account to be created ...
	I0404 21:46:38.095468   21531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 21:46:38.291929   21531 request.go:629] Waited for 196.380448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:46:38.292006   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:46:38.292014   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:38.292024   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:38.292030   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:38.297802   21531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0404 21:46:38.302343   21531 system_pods.go:86] 17 kube-system pods found
	I0404 21:46:38.302372   21531 system_pods.go:89] "coredns-76f75df574-9qsz7" [5af3d10e-47b7-439c-80e3-8ee328d87f16] Running
	I0404 21:46:38.302378   21531 system_pods.go:89] "coredns-76f75df574-hsdfw" [e0384e31-ec4b-4b09-b387-5bce7a36b688] Running
	I0404 21:46:38.302383   21531 system_pods.go:89] "etcd-ha-454952" [d3885d9d-9e4a-4ebb-9bbb-85d1fc88519a] Running
	I0404 21:46:38.302387   21531 system_pods.go:89] "etcd-ha-454952-m02" [a84a3d55-0e63-4944-8368-141d61a3dfdd] Running
	I0404 21:46:38.302391   21531 system_pods.go:89] "kindnet-7c9dv" [044a24e9-2851-47a3-be58-29ecfae2f0fb] Running
	I0404 21:46:38.302395   21531 system_pods.go:89] "kindnet-v8wv6" [44250298-dce4-4e12-88c2-e347b4a63711] Running
	I0404 21:46:38.302398   21531 system_pods.go:89] "kube-apiserver-ha-454952" [93735313-30dc-4c96-b847-a6119cf400c8] Running
	I0404 21:46:38.302402   21531 system_pods.go:89] "kube-apiserver-ha-454952-m02" [b1abcf64-e80a-4e10-b069-c7d6827bda4a] Running
	I0404 21:46:38.302407   21531 system_pods.go:89] "kube-controller-manager-ha-454952" [17a4bba8-4424-4a4c-b5d4-88693cb013b6] Running
	I0404 21:46:38.302411   21531 system_pods.go:89] "kube-controller-manager-ha-454952-m02" [29adfa13-fd82-4b6b-a00a-3ecf9fb575de] Running
	I0404 21:46:38.302415   21531 system_pods.go:89] "kube-proxy-6nkxm" [67f1a256-ee89-4563-ab19-4a75f01d2c3a] Running
	I0404 21:46:38.302418   21531 system_pods.go:89] "kube-proxy-gjvm9" [60759cb6-a394-4e3e-a19e-f9b7c92a19db] Running
	I0404 21:46:38.302422   21531 system_pods.go:89] "kube-scheduler-ha-454952" [9ad2aa31-283d-47d1-a7ff-3c13974f4ba8] Running
	I0404 21:46:38.302429   21531 system_pods.go:89] "kube-scheduler-ha-454952-m02" [89c4a892-3be4-41a7-aa44-b19230ff2515] Running
	I0404 21:46:38.302433   21531 system_pods.go:89] "kube-vip-ha-454952" [87898a7a-a2df-46cf-8b58-f6e5ca4e5f7b] Running
	I0404 21:46:38.302439   21531 system_pods.go:89] "kube-vip-ha-454952-m02" [2632721a-d7cc-40f1-988d-ab0aa8cfe79a] Running
	I0404 21:46:38.302443   21531 system_pods.go:89] "storage-provisioner" [c8531ddb-fa9d-4efe-91cc-072e75a5897d] Running
	I0404 21:46:38.302451   21531 system_pods.go:126] duration metric: took 206.976769ms to wait for k8s-apps to be running ...
	I0404 21:46:38.302461   21531 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 21:46:38.302504   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:46:38.319761   21531 system_svc.go:56] duration metric: took 17.288893ms WaitForService to wait for kubelet
	I0404 21:46:38.319805   21531 kubeadm.go:576] duration metric: took 13.58975508s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 21:46:38.319828   21531 node_conditions.go:102] verifying NodePressure condition ...
	I0404 21:46:38.491192   21531 request.go:629] Waited for 171.296984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes
	I0404 21:46:38.491298   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes
	I0404 21:46:38.491309   21531 round_trippers.go:469] Request Headers:
	I0404 21:46:38.491321   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:46:38.491328   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:46:38.494827   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:46:38.495717   21531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 21:46:38.495737   21531 node_conditions.go:123] node cpu capacity is 2
	I0404 21:46:38.495749   21531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 21:46:38.495753   21531 node_conditions.go:123] node cpu capacity is 2
	I0404 21:46:38.495757   21531 node_conditions.go:105] duration metric: took 175.923144ms to run NodePressure ...
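
The NodePressure step reads each node's capacity fields rather than exercising any workload. A short sketch of the same read, reusing the clientset and imports from the first sketch:

// listNodeCapacity prints each node's CPU and ephemeral-storage capacity,
// mirroring the node_conditions lines in the log above.
func listNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}
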
	I0404 21:46:38.495767   21531 start.go:240] waiting for startup goroutines ...
	I0404 21:46:38.495790   21531 start.go:254] writing updated cluster config ...
	I0404 21:46:38.497976   21531 out.go:177] 
	I0404 21:46:38.499618   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:46:38.499746   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:46:38.501674   21531 out.go:177] * Starting "ha-454952-m03" control-plane node in "ha-454952" cluster
	I0404 21:46:38.502950   21531 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:46:38.502978   21531 cache.go:56] Caching tarball of preloaded images
	I0404 21:46:38.503087   21531 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 21:46:38.503100   21531 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0404 21:46:38.503204   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:46:38.503374   21531 start.go:360] acquireMachinesLock for ha-454952-m03: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 21:46:38.503417   21531 start.go:364] duration metric: took 23.763µs to acquireMachinesLock for "ha-454952-m03"
	I0404 21:46:38.503431   21531 start.go:93] Provisioning new machine with config: &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:46:38.503520   21531 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0404 21:46:38.505236   21531 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0404 21:46:38.505341   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:46:38.505385   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:46:38.522036   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I0404 21:46:38.522433   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:46:38.522935   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:46:38.522955   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:46:38.523285   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:46:38.523515   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetMachineName
	I0404 21:46:38.523647   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:46:38.523785   21531 start.go:159] libmachine.API.Create for "ha-454952" (driver="kvm2")
	I0404 21:46:38.523834   21531 client.go:168] LocalClient.Create starting
	I0404 21:46:38.523869   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem
	I0404 21:46:38.523903   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:46:38.523917   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:46:38.523969   21531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem
	I0404 21:46:38.523987   21531 main.go:141] libmachine: Decoding PEM data...
	I0404 21:46:38.523998   21531 main.go:141] libmachine: Parsing certificate...
	I0404 21:46:38.524013   21531 main.go:141] libmachine: Running pre-create checks...
	I0404 21:46:38.524021   21531 main.go:141] libmachine: (ha-454952-m03) Calling .PreCreateCheck
	I0404 21:46:38.524175   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetConfigRaw
	I0404 21:46:38.524522   21531 main.go:141] libmachine: Creating machine...
	I0404 21:46:38.524536   21531 main.go:141] libmachine: (ha-454952-m03) Calling .Create
	I0404 21:46:38.524669   21531 main.go:141] libmachine: (ha-454952-m03) Creating KVM machine...
	I0404 21:46:38.525942   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found existing default KVM network
	I0404 21:46:38.526083   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found existing private KVM network mk-ha-454952
	I0404 21:46:38.526218   21531 main.go:141] libmachine: (ha-454952-m03) Setting up store path in /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03 ...
	I0404 21:46:38.526238   21531 main.go:141] libmachine: (ha-454952-m03) Building disk image from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 21:46:38.526258   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:38.526190   22299 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:46:38.526353   21531 main.go:141] libmachine: (ha-454952-m03) Downloading /home/jenkins/minikube-integration/16143-5297/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0404 21:46:38.751166   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:38.751030   22299 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa...
	I0404 21:46:38.959700   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:38.959568   22299 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/ha-454952-m03.rawdisk...
	I0404 21:46:38.959728   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Writing magic tar header
	I0404 21:46:38.959739   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Writing SSH key tar header
	I0404 21:46:38.959751   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:38.959683   22299 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03 ...
	I0404 21:46:38.959820   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03
	I0404 21:46:38.959856   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03 (perms=drwx------)
	I0404 21:46:38.959865   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines
	I0404 21:46:38.959873   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines (perms=drwxr-xr-x)
	I0404 21:46:38.959884   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube (perms=drwxr-xr-x)
	I0404 21:46:38.959893   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297 (perms=drwxrwxr-x)
	I0404 21:46:38.959915   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0404 21:46:38.959934   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:46:38.959944   21531 main.go:141] libmachine: (ha-454952-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0404 21:46:38.959952   21531 main.go:141] libmachine: (ha-454952-m03) Creating domain...
	I0404 21:46:38.959998   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297
	I0404 21:46:38.960023   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0404 21:46:38.960034   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home/jenkins
	I0404 21:46:38.960046   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Checking permissions on dir: /home
	I0404 21:46:38.960062   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Skipping /home - not owner
	I0404 21:46:38.960996   21531 main.go:141] libmachine: (ha-454952-m03) define libvirt domain using xml: 
	I0404 21:46:38.961025   21531 main.go:141] libmachine: (ha-454952-m03) <domain type='kvm'>
	I0404 21:46:38.961033   21531 main.go:141] libmachine: (ha-454952-m03)   <name>ha-454952-m03</name>
	I0404 21:46:38.961040   21531 main.go:141] libmachine: (ha-454952-m03)   <memory unit='MiB'>2200</memory>
	I0404 21:46:38.961045   21531 main.go:141] libmachine: (ha-454952-m03)   <vcpu>2</vcpu>
	I0404 21:46:38.961050   21531 main.go:141] libmachine: (ha-454952-m03)   <features>
	I0404 21:46:38.961057   21531 main.go:141] libmachine: (ha-454952-m03)     <acpi/>
	I0404 21:46:38.961063   21531 main.go:141] libmachine: (ha-454952-m03)     <apic/>
	I0404 21:46:38.961070   21531 main.go:141] libmachine: (ha-454952-m03)     <pae/>
	I0404 21:46:38.961077   21531 main.go:141] libmachine: (ha-454952-m03)     
	I0404 21:46:38.961084   21531 main.go:141] libmachine: (ha-454952-m03)   </features>
	I0404 21:46:38.961094   21531 main.go:141] libmachine: (ha-454952-m03)   <cpu mode='host-passthrough'>
	I0404 21:46:38.961106   21531 main.go:141] libmachine: (ha-454952-m03)   
	I0404 21:46:38.961114   21531 main.go:141] libmachine: (ha-454952-m03)   </cpu>
	I0404 21:46:38.961141   21531 main.go:141] libmachine: (ha-454952-m03)   <os>
	I0404 21:46:38.961166   21531 main.go:141] libmachine: (ha-454952-m03)     <type>hvm</type>
	I0404 21:46:38.961176   21531 main.go:141] libmachine: (ha-454952-m03)     <boot dev='cdrom'/>
	I0404 21:46:38.961189   21531 main.go:141] libmachine: (ha-454952-m03)     <boot dev='hd'/>
	I0404 21:46:38.961199   21531 main.go:141] libmachine: (ha-454952-m03)     <bootmenu enable='no'/>
	I0404 21:46:38.961209   21531 main.go:141] libmachine: (ha-454952-m03)   </os>
	I0404 21:46:38.961217   21531 main.go:141] libmachine: (ha-454952-m03)   <devices>
	I0404 21:46:38.961229   21531 main.go:141] libmachine: (ha-454952-m03)     <disk type='file' device='cdrom'>
	I0404 21:46:38.961248   21531 main.go:141] libmachine: (ha-454952-m03)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/boot2docker.iso'/>
	I0404 21:46:38.961261   21531 main.go:141] libmachine: (ha-454952-m03)       <target dev='hdc' bus='scsi'/>
	I0404 21:46:38.961300   21531 main.go:141] libmachine: (ha-454952-m03)       <readonly/>
	I0404 21:46:38.961338   21531 main.go:141] libmachine: (ha-454952-m03)     </disk>
	I0404 21:46:38.961355   21531 main.go:141] libmachine: (ha-454952-m03)     <disk type='file' device='disk'>
	I0404 21:46:38.961370   21531 main.go:141] libmachine: (ha-454952-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0404 21:46:38.961408   21531 main.go:141] libmachine: (ha-454952-m03)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/ha-454952-m03.rawdisk'/>
	I0404 21:46:38.961432   21531 main.go:141] libmachine: (ha-454952-m03)       <target dev='hda' bus='virtio'/>
	I0404 21:46:38.961443   21531 main.go:141] libmachine: (ha-454952-m03)     </disk>
	I0404 21:46:38.961451   21531 main.go:141] libmachine: (ha-454952-m03)     <interface type='network'>
	I0404 21:46:38.961463   21531 main.go:141] libmachine: (ha-454952-m03)       <source network='mk-ha-454952'/>
	I0404 21:46:38.961475   21531 main.go:141] libmachine: (ha-454952-m03)       <model type='virtio'/>
	I0404 21:46:38.961487   21531 main.go:141] libmachine: (ha-454952-m03)     </interface>
	I0404 21:46:38.961499   21531 main.go:141] libmachine: (ha-454952-m03)     <interface type='network'>
	I0404 21:46:38.961510   21531 main.go:141] libmachine: (ha-454952-m03)       <source network='default'/>
	I0404 21:46:38.961528   21531 main.go:141] libmachine: (ha-454952-m03)       <model type='virtio'/>
	I0404 21:46:38.961546   21531 main.go:141] libmachine: (ha-454952-m03)     </interface>
	I0404 21:46:38.961563   21531 main.go:141] libmachine: (ha-454952-m03)     <serial type='pty'>
	I0404 21:46:38.961574   21531 main.go:141] libmachine: (ha-454952-m03)       <target port='0'/>
	I0404 21:46:38.961585   21531 main.go:141] libmachine: (ha-454952-m03)     </serial>
	I0404 21:46:38.961595   21531 main.go:141] libmachine: (ha-454952-m03)     <console type='pty'>
	I0404 21:46:38.961607   21531 main.go:141] libmachine: (ha-454952-m03)       <target type='serial' port='0'/>
	I0404 21:46:38.961622   21531 main.go:141] libmachine: (ha-454952-m03)     </console>
	I0404 21:46:38.961638   21531 main.go:141] libmachine: (ha-454952-m03)     <rng model='virtio'>
	I0404 21:46:38.961650   21531 main.go:141] libmachine: (ha-454952-m03)       <backend model='random'>/dev/random</backend>
	I0404 21:46:38.961660   21531 main.go:141] libmachine: (ha-454952-m03)     </rng>
	I0404 21:46:38.961670   21531 main.go:141] libmachine: (ha-454952-m03)     
	I0404 21:46:38.961682   21531 main.go:141] libmachine: (ha-454952-m03)     
	I0404 21:46:38.961697   21531 main.go:141] libmachine: (ha-454952-m03)   </devices>
	I0404 21:46:38.961710   21531 main.go:141] libmachine: (ha-454952-m03) </domain>
	I0404 21:46:38.961720   21531 main.go:141] libmachine: (ha-454952-m03) 
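
The XML printed above is the libvirt domain definition generated for the new m03 VM. A minimal sketch of defining and starting such a domain with the libvirt Go bindings — this assumes the libvirt.org/go/libvirt package and elides the XML to the snippet shown in the log; it is not libmachine's actual driver code:

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	// Connect to the same URI the config above uses (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Domain XML as printed in the log above (elided here).
	const domainXML = `<domain type='kvm'>...</domain>`

	// Define the domain from XML, then start it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}
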
	I0404 21:46:38.968849   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:10:41:55 in network default
	I0404 21:46:38.969511   21531 main.go:141] libmachine: (ha-454952-m03) Ensuring networks are active...
	I0404 21:46:38.969545   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:38.970384   21531 main.go:141] libmachine: (ha-454952-m03) Ensuring network default is active
	I0404 21:46:38.970739   21531 main.go:141] libmachine: (ha-454952-m03) Ensuring network mk-ha-454952 is active
	I0404 21:46:38.971188   21531 main.go:141] libmachine: (ha-454952-m03) Getting domain xml...
	I0404 21:46:38.971925   21531 main.go:141] libmachine: (ha-454952-m03) Creating domain...
	I0404 21:46:40.197829   21531 main.go:141] libmachine: (ha-454952-m03) Waiting to get IP...
	I0404 21:46:40.198601   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:40.199014   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:40.199054   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:40.198993   22299 retry.go:31] will retry after 264.293345ms: waiting for machine to come up
	I0404 21:46:40.464550   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:40.464998   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:40.465026   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:40.464962   22299 retry.go:31] will retry after 277.153815ms: waiting for machine to come up
	I0404 21:46:40.743411   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:40.743942   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:40.743969   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:40.743888   22299 retry.go:31] will retry after 302.772126ms: waiting for machine to come up
	I0404 21:46:41.048485   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:41.048967   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:41.048994   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:41.048916   22299 retry.go:31] will retry after 554.26818ms: waiting for machine to come up
	I0404 21:46:41.604852   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:41.605279   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:41.605307   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:41.605243   22299 retry.go:31] will retry after 593.569938ms: waiting for machine to come up
	I0404 21:46:42.199905   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:42.200439   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:42.200468   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:42.200400   22299 retry.go:31] will retry after 781.69482ms: waiting for machine to come up
	I0404 21:46:42.983490   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:42.983956   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:42.983983   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:42.983919   22299 retry.go:31] will retry after 999.658039ms: waiting for machine to come up
	I0404 21:46:43.985049   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:43.985669   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:43.985699   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:43.985624   22299 retry.go:31] will retry after 1.386933992s: waiting for machine to come up
	I0404 21:46:45.374475   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:45.374922   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:45.374959   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:45.374865   22299 retry.go:31] will retry after 1.790186863s: waiting for machine to come up
	I0404 21:46:47.167264   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:47.167792   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:47.167827   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:47.167749   22299 retry.go:31] will retry after 2.034077008s: waiting for machine to come up
	I0404 21:46:49.203112   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:49.203633   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:49.203662   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:49.203590   22299 retry.go:31] will retry after 2.285549921s: waiting for machine to come up
	I0404 21:46:51.491955   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:51.492431   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:51.492460   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:51.492366   22299 retry.go:31] will retry after 2.436406698s: waiting for machine to come up
	I0404 21:46:53.929897   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:53.930303   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:53.930330   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:53.930266   22299 retry.go:31] will retry after 4.105717474s: waiting for machine to come up
	I0404 21:46:58.038094   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:46:58.038630   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find current IP address of domain ha-454952-m03 in network mk-ha-454952
	I0404 21:46:58.038657   21531 main.go:141] libmachine: (ha-454952-m03) DBG | I0404 21:46:58.038586   22299 retry.go:31] will retry after 4.207781957s: waiting for machine to come up
	I0404 21:47:02.250815   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.251320   21531 main.go:141] libmachine: (ha-454952-m03) Found IP for machine: 192.168.39.217
	I0404 21:47:02.251340   21531 main.go:141] libmachine: (ha-454952-m03) Reserving static IP address...
	I0404 21:47:02.251353   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has current primary IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.251822   21531 main.go:141] libmachine: (ha-454952-m03) DBG | unable to find host DHCP lease matching {name: "ha-454952-m03", mac: "52:54:00:9a:12:2d", ip: "192.168.39.217"} in network mk-ha-454952
	I0404 21:47:02.327917   21531 main.go:141] libmachine: (ha-454952-m03) Reserved static IP address: 192.168.39.217
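
The retry.go lines above poll the DHCP lease for the new domain with growing delays until an address appears. A generic sketch of that wait-with-backoff pattern — the lookup function, delays, and cap are placeholders, not minikube's retry package (assumes fmt and time are imported):

// waitForIP polls a lookup function with growing delays, mirroring the
// "will retry after ...: waiting for machine to come up" lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for IP: %v", err)
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2 // back off, roughly like the increasing intervals in the log
		}
	}
}
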
	I0404 21:47:02.327960   21531 main.go:141] libmachine: (ha-454952-m03) Waiting for SSH to be available...
	I0404 21:47:02.327971   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Getting to WaitForSSH function...
	I0404 21:47:02.330218   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.330589   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.330622   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.330775   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Using SSH client type: external
	I0404 21:47:02.330809   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa (-rw-------)
	I0404 21:47:02.330839   21531 main.go:141] libmachine: (ha-454952-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 21:47:02.330851   21531 main.go:141] libmachine: (ha-454952-m03) DBG | About to run SSH command:
	I0404 21:47:02.330869   21531 main.go:141] libmachine: (ha-454952-m03) DBG | exit 0
	I0404 21:47:02.460413   21531 main.go:141] libmachine: (ha-454952-m03) DBG | SSH cmd err, output: <nil>: 
	I0404 21:47:02.460800   21531 main.go:141] libmachine: (ha-454952-m03) KVM machine creation complete!
	I0404 21:47:02.461059   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetConfigRaw
	I0404 21:47:02.461581   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:02.461784   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:02.461974   21531 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0404 21:47:02.461989   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetState
	I0404 21:47:02.463411   21531 main.go:141] libmachine: Detecting operating system of created instance...
	I0404 21:47:02.463429   21531 main.go:141] libmachine: Waiting for SSH to be available...
	I0404 21:47:02.463446   21531 main.go:141] libmachine: Getting to WaitForSSH function...
	I0404 21:47:02.463453   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:02.465846   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.466279   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.466310   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.466517   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:02.466719   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.466916   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.467061   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:02.467198   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:02.467427   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:02.467440   21531 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0404 21:47:02.571581   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:47:02.571618   21531 main.go:141] libmachine: Detecting the provisioner...
	I0404 21:47:02.571648   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:02.574609   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.575029   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.575072   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.575328   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:02.575580   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.575729   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.575877   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:02.576045   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:02.576242   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:02.576253   21531 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0404 21:47:02.681449   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0404 21:47:02.681513   21531 main.go:141] libmachine: found compatible host: buildroot
	I0404 21:47:02.681520   21531 main.go:141] libmachine: Provisioning with buildroot...
	I0404 21:47:02.681528   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetMachineName
	I0404 21:47:02.681763   21531 buildroot.go:166] provisioning hostname "ha-454952-m03"
	I0404 21:47:02.681792   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetMachineName
	I0404 21:47:02.681994   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:02.684978   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.685335   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.685363   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.685478   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:02.685659   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.685826   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.685949   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:02.686152   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:02.686350   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:02.686367   21531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-454952-m03 && echo "ha-454952-m03" | sudo tee /etc/hostname
	I0404 21:47:02.808594   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-454952-m03
	
	I0404 21:47:02.808621   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:02.811675   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.812015   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.812041   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.812263   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:02.812459   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.812609   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:02.812713   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:02.812839   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:02.813038   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:02.813071   21531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-454952-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-454952-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-454952-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 21:47:02.932179   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:47:02.932211   21531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 21:47:02.932229   21531 buildroot.go:174] setting up certificates
	I0404 21:47:02.932248   21531 provision.go:84] configureAuth start
	I0404 21:47:02.932264   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetMachineName
	I0404 21:47:02.932561   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:47:02.934986   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.935325   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.935354   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.935473   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:02.937751   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.938068   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:02.938095   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:02.938196   21531 provision.go:143] copyHostCerts
	I0404 21:47:02.938224   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:47:02.938261   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 21:47:02.938273   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:47:02.938344   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 21:47:02.938438   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:47:02.938463   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 21:47:02.938471   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:47:02.938512   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 21:47:02.938575   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:47:02.938597   21531 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 21:47:02.938610   21531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:47:02.938647   21531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 21:47:02.938710   21531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.ha-454952-m03 san=[127.0.0.1 192.168.39.217 ha-454952-m03 localhost minikube]
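	[editor's note] The provision.go line above is where minikube mints the per-node server certificate whose SANs cover the node IP and hostnames listed. Below is a minimal, hypothetical Go sketch of generating such a certificate with the standard library; it is not minikube's actual provision code, and the organization name, RSA key size, validity period, and self-signing shortcut are assumptions for illustration only.

	// Hypothetical sketch: generate a server certificate carrying the SANs from the log line above.
	// Not minikube's provision code; the real flow signs with ca.pem/ca-key.pem rather than self-signing.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key for the new server certificate (2048 bits is an assumption).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-454952-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.217")},
			DNSNames:    []string{"ha-454952-m03", "localhost", "minikube"},
		}

		// Self-signed here only for brevity.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}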
	I0404 21:47:03.114002   21531 provision.go:177] copyRemoteCerts
	I0404 21:47:03.114058   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 21:47:03.114079   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:03.116814   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.117222   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.117250   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.117449   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.117660   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.117830   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.117979   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:47:03.207569   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0404 21:47:03.207651   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0404 21:47:03.239055   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0404 21:47:03.239122   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 21:47:03.269252   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0404 21:47:03.269316   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 21:47:03.299508   21531 provision.go:87] duration metric: took 367.244373ms to configureAuth
	I0404 21:47:03.299539   21531 buildroot.go:189] setting minikube options for container-runtime
	I0404 21:47:03.299802   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:47:03.299883   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:03.302546   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.302965   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.303007   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.303144   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.303334   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.303530   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.303668   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.303835   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:03.304007   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:03.304021   21531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 21:47:03.589600   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 21:47:03.589641   21531 main.go:141] libmachine: Checking connection to Docker...
	I0404 21:47:03.589654   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetURL
	I0404 21:47:03.591172   21531 main.go:141] libmachine: (ha-454952-m03) DBG | Using libvirt version 6000000
	I0404 21:47:03.593791   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.594282   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.594309   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.594507   21531 main.go:141] libmachine: Docker is up and running!
	I0404 21:47:03.594522   21531 main.go:141] libmachine: Reticulating splines...
	I0404 21:47:03.594529   21531 client.go:171] duration metric: took 25.070684836s to LocalClient.Create
	I0404 21:47:03.594549   21531 start.go:167] duration metric: took 25.070764129s to libmachine.API.Create "ha-454952"
	I0404 21:47:03.594556   21531 start.go:293] postStartSetup for "ha-454952-m03" (driver="kvm2")
	I0404 21:47:03.594568   21531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 21:47:03.594583   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:03.594861   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 21:47:03.594884   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:03.597411   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.597944   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.597982   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.598152   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.598348   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.598537   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.598734   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:47:03.683420   21531 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 21:47:03.688599   21531 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 21:47:03.688621   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 21:47:03.688680   21531 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 21:47:03.688775   21531 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 21:47:03.688791   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /etc/ssl/certs/125542.pem
	I0404 21:47:03.688911   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 21:47:03.699405   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:47:03.728970   21531 start.go:296] duration metric: took 134.401187ms for postStartSetup
	I0404 21:47:03.729023   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetConfigRaw
	I0404 21:47:03.729580   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:47:03.732110   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.732509   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.732541   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.732785   21531 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:47:03.732967   21531 start.go:128] duration metric: took 25.229435833s to createHost
	I0404 21:47:03.732989   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:03.735151   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.735465   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.735491   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.735597   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.735752   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.735931   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.736070   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.736247   21531 main.go:141] libmachine: Using SSH client type: native
	I0404 21:47:03.736407   21531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0404 21:47:03.736418   21531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 21:47:03.841380   21531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712267223.811853161
	
	I0404 21:47:03.841403   21531 fix.go:216] guest clock: 1712267223.811853161
	I0404 21:47:03.841410   21531 fix.go:229] Guest: 2024-04-04 21:47:03.811853161 +0000 UTC Remote: 2024-04-04 21:47:03.732979005 +0000 UTC m=+181.129612197 (delta=78.874156ms)
	I0404 21:47:03.841424   21531 fix.go:200] guest clock delta is within tolerance: 78.874156ms
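	[editor's note] The two fix.go lines above compute the guest/host clock delta (78.874156ms here) and accept it because it falls under the sync tolerance. A small, hypothetical Go sketch of that kind of check follows; the 2s threshold is an assumption, not minikube's configured value.

	// Hypothetical sketch of a guest-clock tolerance check like the one the log describes.
	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance reports whether the guest/host clock difference
	// is small enough to skip an explicit time resync.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Timestamps taken from the log lines above.
		guest := time.Unix(0, 1712267223811853161) // 2024-04-04 21:47:03.811853161 UTC
		host := time.Date(2024, 4, 4, 21, 47, 3, 732979005, time.UTC)
		delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}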
	I0404 21:47:03.841429   21531 start.go:83] releasing machines lock for "ha-454952-m03", held for 25.338005514s
	I0404 21:47:03.841454   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:03.841735   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:47:03.844330   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.844672   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.844704   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.847227   21531 out.go:177] * Found network options:
	I0404 21:47:03.848931   21531 out.go:177]   - NO_PROXY=192.168.39.13,192.168.39.60
	W0404 21:47:03.850171   21531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0404 21:47:03.850197   21531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0404 21:47:03.850216   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:03.850838   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:03.851027   21531 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:47:03.851124   21531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 21:47:03.851161   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	W0404 21:47:03.851221   21531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0404 21:47:03.851245   21531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0404 21:47:03.851303   21531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 21:47:03.851321   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:47:03.853996   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.854291   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.854426   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.854453   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.854609   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.854719   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:03.854755   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:03.854819   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.854932   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:47:03.855016   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.855091   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:47:03.855130   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:47:03.855343   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:47:03.855487   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:47:04.099816   21531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 21:47:04.106362   21531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 21:47:04.106421   21531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 21:47:04.123378   21531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 21:47:04.123410   21531 start.go:494] detecting cgroup driver to use...
	I0404 21:47:04.123488   21531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 21:47:04.141852   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 21:47:04.159165   21531 docker.go:217] disabling cri-docker service (if available) ...
	I0404 21:47:04.159229   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 21:47:04.177006   21531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 21:47:04.194125   21531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 21:47:04.327940   21531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 21:47:04.504790   21531 docker.go:233] disabling docker service ...
	I0404 21:47:04.504863   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 21:47:04.520940   21531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 21:47:04.535619   21531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 21:47:04.681131   21531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 21:47:04.832749   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 21:47:04.850027   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 21:47:04.870589   21531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 21:47:04.870640   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.883131   21531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 21:47:04.883221   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.895438   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.906843   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.920442   21531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 21:47:04.935807   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.947559   21531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.966537   21531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:47:04.979817   21531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 21:47:04.993294   21531 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 21:47:04.993370   21531 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 21:47:05.009157   21531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 21:47:05.020517   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:47:05.149876   21531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 21:47:05.294829   21531 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 21:47:05.294893   21531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 21:47:05.300168   21531 start.go:562] Will wait 60s for crictl version
	I0404 21:47:05.300230   21531 ssh_runner.go:195] Run: which crictl
	I0404 21:47:05.304472   21531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 21:47:05.347248   21531 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 21:47:05.347328   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:47:05.377891   21531 ssh_runner.go:195] Run: crio --version
	I0404 21:47:05.413271   21531 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 21:47:05.414917   21531 out.go:177]   - env NO_PROXY=192.168.39.13
	I0404 21:47:05.416432   21531 out.go:177]   - env NO_PROXY=192.168.39.13,192.168.39.60
	I0404 21:47:05.418002   21531 main.go:141] libmachine: (ha-454952-m03) Calling .GetIP
	I0404 21:47:05.420812   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:05.421166   21531 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:47:05.421211   21531 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:47:05.421406   21531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 21:47:05.426334   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:47:05.439120   21531 mustload.go:65] Loading cluster: ha-454952
	I0404 21:47:05.439353   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:47:05.439598   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:47:05.439640   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:47:05.457894   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44213
	I0404 21:47:05.458324   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:47:05.458931   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:47:05.458957   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:47:05.459279   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:47:05.459522   21531 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:47:05.461375   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:47:05.461816   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:47:05.461864   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:47:05.478759   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0404 21:47:05.479203   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:47:05.479725   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:47:05.479746   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:47:05.480083   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:47:05.480272   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:47:05.480420   21531 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952 for IP: 192.168.39.217
	I0404 21:47:05.480433   21531 certs.go:194] generating shared ca certs ...
	I0404 21:47:05.480453   21531 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:47:05.480601   21531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 21:47:05.480639   21531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 21:47:05.480647   21531 certs.go:256] generating profile certs ...
	I0404 21:47:05.480742   21531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key
	I0404 21:47:05.480776   21531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.d2808486
	I0404 21:47:05.480797   21531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.d2808486 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.13 192.168.39.60 192.168.39.217 192.168.39.254]
	I0404 21:47:05.603531   21531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.d2808486 ...
	I0404 21:47:05.603568   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.d2808486: {Name:mk0cc3bbe2d9482aa4cd27d58f26cfde4dced9b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:47:05.603784   21531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.d2808486 ...
	I0404 21:47:05.603813   21531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.d2808486: {Name:mk40ea018c5e3d70413a022d8b7dd05636971c53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:47:05.603934   21531 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.d2808486 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt
	I0404 21:47:05.604067   21531 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.d2808486 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key
	I0404 21:47:05.604218   21531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key
	I0404 21:47:05.604233   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0404 21:47:05.604247   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0404 21:47:05.604257   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0404 21:47:05.604270   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0404 21:47:05.604285   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0404 21:47:05.604298   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0404 21:47:05.604309   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0404 21:47:05.604322   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0404 21:47:05.604411   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 21:47:05.604442   21531 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 21:47:05.604450   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 21:47:05.604470   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 21:47:05.604492   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 21:47:05.604515   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 21:47:05.604551   21531 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:47:05.604576   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /usr/share/ca-certificates/125542.pem
	I0404 21:47:05.604591   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:47:05.604603   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem -> /usr/share/ca-certificates/12554.pem
	I0404 21:47:05.604632   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:47:05.608137   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:47:05.608594   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:47:05.608624   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:47:05.608848   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:47:05.609053   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:47:05.609215   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:47:05.609485   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:47:05.688518   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0404 21:47:05.694369   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0404 21:47:05.707260   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0404 21:47:05.713260   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0404 21:47:05.726270   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0404 21:47:05.731336   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0404 21:47:05.743032   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0404 21:47:05.747381   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0404 21:47:05.759932   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0404 21:47:05.765393   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0404 21:47:05.779336   21531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0404 21:47:05.785583   21531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0404 21:47:05.801216   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 21:47:05.830450   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 21:47:05.858543   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 21:47:05.885952   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 21:47:05.915827   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0404 21:47:05.945702   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 21:47:05.973323   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 21:47:05.999777   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 21:47:06.027485   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 21:47:06.054894   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 21:47:06.080707   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 21:47:06.112038   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0404 21:47:06.130812   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0404 21:47:06.149359   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0404 21:47:06.168517   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0404 21:47:06.187518   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0404 21:47:06.206356   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0404 21:47:06.226931   21531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0404 21:47:06.244924   21531 ssh_runner.go:195] Run: openssl version
	I0404 21:47:06.250867   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 21:47:06.261977   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 21:47:06.266832   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 21:47:06.266893   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 21:47:06.273526   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 21:47:06.286438   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 21:47:06.298083   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:47:06.303030   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:47:06.303083   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:47:06.308949   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 21:47:06.320340   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 21:47:06.331957   21531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 21:47:06.337071   21531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 21:47:06.337135   21531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 21:47:06.343633   21531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 21:47:06.355323   21531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 21:47:06.359818   21531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0404 21:47:06.359868   21531 kubeadm.go:928] updating node {m03 192.168.39.217 8443 v1.29.3 crio true true} ...
	I0404 21:47:06.359958   21531 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-454952-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 21:47:06.359992   21531 kube-vip.go:111] generating kube-vip config ...
	I0404 21:47:06.360035   21531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0404 21:47:06.383555   21531 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0404 21:47:06.383629   21531 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0404 21:47:06.383703   21531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 21:47:06.405837   21531 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0404 21:47:06.405891   21531 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0404 21:47:06.418113   21531 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0404 21:47:06.418137   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0404 21:47:06.418113   21531 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0404 21:47:06.418118   21531 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0404 21:47:06.418181   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0404 21:47:06.418186   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:47:06.418273   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0404 21:47:06.418202   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0404 21:47:06.424007   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0404 21:47:06.424036   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0404 21:47:06.468728   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0404 21:47:06.468755   21531 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0404 21:47:06.468767   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0404 21:47:06.468872   21531 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0404 21:47:06.515419   21531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0404 21:47:06.515459   21531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0404 21:47:07.359563   21531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0404 21:47:07.370631   21531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0404 21:47:07.391971   21531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 21:47:07.412735   21531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0404 21:47:07.433476   21531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0404 21:47:07.438016   21531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 21:47:07.451197   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:47:07.598119   21531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:47:07.617905   21531 host.go:66] Checking if "ha-454952" exists ...
	I0404 21:47:07.618256   21531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:47:07.618309   21531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:47:07.634014   21531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0404 21:47:07.634519   21531 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:47:07.634993   21531 main.go:141] libmachine: Using API Version  1
	I0404 21:47:07.635011   21531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:47:07.635398   21531 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:47:07.635653   21531 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:47:07.635810   21531 start.go:316] joinCluster: &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:47:07.635985   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0404 21:47:07.636014   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:47:07.638766   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:47:07.639250   21531 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:47:07.639283   21531 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:47:07.639408   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:47:07.639586   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:47:07.639761   21531 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:47:07.639918   21531 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:47:07.815167   21531 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:47:07.815224   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1pn74p.cie5sg4qa194aihi --discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-454952-m03 --control-plane --apiserver-advertise-address=192.168.39.217 --apiserver-bind-port=8443"
	I0404 21:47:36.022831   21531 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1pn74p.cie5sg4qa194aihi --discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-454952-m03 --control-plane --apiserver-advertise-address=192.168.39.217 --apiserver-bind-port=8443": (28.207584886s)
	I0404 21:47:36.022867   21531 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0404 21:47:36.457236   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-454952-m03 minikube.k8s.io/updated_at=2024_04_04T21_47_36_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=ha-454952 minikube.k8s.io/primary=false
	I0404 21:47:36.597917   21531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-454952-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0404 21:47:36.708053   21531 start.go:318] duration metric: took 29.072241272s to joinCluster
	I0404 21:47:36.708112   21531 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 21:47:36.709890   21531 out.go:177] * Verifying Kubernetes components...
	I0404 21:47:36.708439   21531 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:47:36.711385   21531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:47:37.022947   21531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:47:37.087214   21531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:47:37.087547   21531 kapi.go:59] client config for ha-454952: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key", CAFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5c6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0404 21:47:37.087629   21531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.13:8443
	I0404 21:47:37.087891   21531 node_ready.go:35] waiting up to 6m0s for node "ha-454952-m03" to be "Ready" ...
	I0404 21:47:37.087995   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:37.088006   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:37.088016   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:37.088026   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:37.093468   21531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0404 21:47:37.588763   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:37.588786   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:37.588797   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:37.588806   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:37.593379   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:38.088873   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:38.088899   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:38.088911   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:38.088917   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:38.093168   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:38.588850   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:38.588878   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:38.588888   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:38.588893   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:38.593483   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:39.088168   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:39.088189   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:39.088197   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:39.088201   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:39.092598   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:39.093497   21531 node_ready.go:53] node "ha-454952-m03" has status "Ready":"False"
	I0404 21:47:39.588772   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:39.588793   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:39.588800   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:39.588805   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:39.592822   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:40.088570   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:40.088616   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:40.088627   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:40.088633   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:40.092576   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:40.588369   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:40.588390   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:40.588397   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:40.588401   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:40.592489   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:41.088719   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:41.088740   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:41.088749   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:41.088753   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:41.093469   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:41.094277   21531 node_ready.go:53] node "ha-454952-m03" has status "Ready":"False"
	I0404 21:47:41.588611   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:41.588635   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:41.588646   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:41.588651   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:41.592703   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:42.088660   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:42.088683   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.088691   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.088696   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.093144   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:42.588673   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:42.588709   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.588720   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.588726   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.593147   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:42.593880   21531 node_ready.go:49] node "ha-454952-m03" has status "Ready":"True"
	I0404 21:47:42.593907   21531 node_ready.go:38] duration metric: took 5.505995976s for node "ha-454952-m03" to be "Ready" ...
	I0404 21:47:42.593918   21531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 21:47:42.593994   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:47:42.594008   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.594019   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.594025   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.601196   21531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0404 21:47:42.609597   21531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9qsz7" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.609700   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-9qsz7
	I0404 21:47:42.609717   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.609727   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.609735   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.613047   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:42.613723   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:42.613736   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.613744   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.613748   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.616436   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.616963   21531 pod_ready.go:92] pod "coredns-76f75df574-9qsz7" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:42.616978   21531 pod_ready.go:81] duration metric: took 7.352588ms for pod "coredns-76f75df574-9qsz7" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.616987   21531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hsdfw" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.617030   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-hsdfw
	I0404 21:47:42.617037   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.617044   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.617050   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.619751   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.620582   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:42.620604   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.620611   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.620624   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.623245   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.623643   21531 pod_ready.go:92] pod "coredns-76f75df574-hsdfw" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:42.623659   21531 pod_ready.go:81] duration metric: took 6.666239ms for pod "coredns-76f75df574-hsdfw" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.623668   21531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.623709   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952
	I0404 21:47:42.623717   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.623723   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.623727   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.626447   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.626937   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:42.626950   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.626957   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.626962   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.629416   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.629901   21531 pod_ready.go:92] pod "etcd-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:42.629916   21531 pod_ready.go:81] duration metric: took 6.242973ms for pod "etcd-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.629925   21531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.629975   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m02
	I0404 21:47:42.629983   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.629990   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.629995   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.633192   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:42.633942   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:42.633959   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.633968   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.633976   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.636510   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:42.636957   21531 pod_ready.go:92] pod "etcd-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:42.636970   21531 pod_ready.go:81] duration metric: took 7.039766ms for pod "etcd-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.636981   21531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:42.789114   21531 request.go:629] Waited for 152.070592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:42.789163   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:42.789169   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.789176   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.789181   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.793499   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:42.989502   21531 request.go:629] Waited for 195.358854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:42.989578   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:42.989587   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:42.989597   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:42.989602   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:42.994226   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:43.189189   21531 request.go:629] Waited for 51.228709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:43.189245   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:43.189251   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:43.189261   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:43.189265   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:43.193616   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:43.388827   21531 request.go:629] Waited for 194.308739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:43.388890   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:43.388898   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:43.388908   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:43.388915   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:43.393180   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:43.637281   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:43.637310   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:43.637321   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:43.637328   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:43.641617   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:43.788932   21531 request.go:629] Waited for 146.432841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:43.789007   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:43.789024   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:43.789032   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:43.789036   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:43.793632   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:44.137617   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:44.137639   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:44.137647   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:44.137652   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:44.141797   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:44.188968   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:44.188989   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:44.188997   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:44.189000   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:44.192891   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:44.637905   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:44.637926   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:44.637933   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:44.637937   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:44.641521   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:44.642196   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:44.642216   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:44.642226   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:44.642232   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:44.645544   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:44.646190   21531 pod_ready.go:102] pod "etcd-ha-454952-m03" in "kube-system" namespace has status "Ready":"False"
	I0404 21:47:45.137373   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:45.137408   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:45.137429   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:45.137434   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:45.141414   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:45.142207   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:45.142220   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:45.142226   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:45.142231   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:45.145713   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:45.637644   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:45.637669   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:45.637679   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:45.637683   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:45.641840   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:45.642706   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:45.642723   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:45.642734   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:45.642741   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:45.645561   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:46.137543   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:46.137566   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:46.137573   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:46.137577   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:46.141603   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:46.142472   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:46.142488   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:46.142495   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:46.142498   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:46.145597   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:46.637662   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:46.637689   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:46.637697   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:46.637702   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:46.642465   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:46.643258   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:46.643274   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:46.643282   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:46.643286   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:46.646810   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:46.647562   21531 pod_ready.go:102] pod "etcd-ha-454952-m03" in "kube-system" namespace has status "Ready":"False"
	I0404 21:47:47.137788   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:47.137808   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:47.137815   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:47.137819   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:47.141794   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:47.142529   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:47.142546   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:47.142553   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:47.142559   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:47.145451   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:47.637232   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:47.637252   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:47.637259   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:47.637264   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:47.641356   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:47.642238   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:47.642254   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:47.642263   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:47.642268   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:47.646935   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:48.137913   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:48.137940   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:48.137949   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:48.137959   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:48.141476   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:48.142143   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:48.142163   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:48.142173   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:48.142179   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:48.145222   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:48.637719   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:48.637745   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:48.637756   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:48.637762   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:48.641979   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:48.642777   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:48.642799   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:48.642808   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:48.642813   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:48.645736   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:49.138231   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:49.138255   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.138266   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.138271   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.143309   21531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0404 21:47:49.144334   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:49.144355   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.144367   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.144371   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.147675   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.148667   21531 pod_ready.go:102] pod "etcd-ha-454952-m03" in "kube-system" namespace has status "Ready":"False"
	I0404 21:47:49.638112   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/etcd-ha-454952-m03
	I0404 21:47:49.638216   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.638235   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.638256   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.641823   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.642752   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:49.642772   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.642783   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.642788   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.645830   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.646281   21531 pod_ready.go:92] pod "etcd-ha-454952-m03" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:49.646299   21531 pod_ready.go:81] duration metric: took 7.009306934s for pod "etcd-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.646325   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.646403   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-454952
	I0404 21:47:49.646412   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.646422   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.646430   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.649330   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:49.649965   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:49.649980   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.650003   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.650011   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.652978   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:49.653544   21531 pod_ready.go:92] pod "kube-apiserver-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:49.653563   21531 pod_ready.go:81] duration metric: took 7.226681ms for pod "kube-apiserver-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.653589   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.653671   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-454952-m02
	I0404 21:47:49.653681   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.653691   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.653698   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.656742   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.657314   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:49.657331   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.657342   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.657347   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.660256   21531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0404 21:47:49.660788   21531 pod_ready.go:92] pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:49.660804   21531 pod_ready.go:81] duration metric: took 7.204956ms for pod "kube-apiserver-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.660813   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.660858   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-454952-m03
	I0404 21:47:49.660866   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.660872   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.660876   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.664699   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.788774   21531 request.go:629] Waited for 122.730778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:49.788824   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:49.788837   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.788860   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.788868   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.792815   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:49.793287   21531 pod_ready.go:92] pod "kube-apiserver-ha-454952-m03" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:49.793312   21531 pod_ready.go:81] duration metric: took 132.491239ms for pod "kube-apiserver-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.793326   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:49.988698   21531 request.go:629] Waited for 195.289882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952
	I0404 21:47:49.988817   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952
	I0404 21:47:49.988824   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:49.988837   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:49.988842   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:49.992748   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:50.188695   21531 request.go:629] Waited for 195.268681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:50.188761   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:50.188766   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:50.188773   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:50.188785   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:50.193289   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:50.193835   21531 pod_ready.go:92] pod "kube-controller-manager-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:50.193870   21531 pod_ready.go:81] duration metric: took 400.534499ms for pod "kube-controller-manager-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:50.193884   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:50.389244   21531 request.go:629] Waited for 195.275135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m02
	I0404 21:47:50.389344   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m02
	I0404 21:47:50.389352   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:50.389363   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:50.389381   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:50.393830   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:50.588784   21531 request.go:629] Waited for 193.944084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:50.588873   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:50.588888   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:50.588898   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:50.588908   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:50.593077   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:50.593723   21531 pod_ready.go:92] pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:50.593740   21531 pod_ready.go:81] duration metric: took 399.848828ms for pod "kube-controller-manager-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:50.593749   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:50.788930   21531 request.go:629] Waited for 195.126625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m03
	I0404 21:47:50.788996   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-454952-m03
	I0404 21:47:50.789004   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:50.789014   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:50.789018   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:50.793082   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:50.989317   21531 request.go:629] Waited for 195.402098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:50.989393   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:50.989398   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:50.989405   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:50.989409   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:50.993530   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:50.994104   21531 pod_ready.go:92] pod "kube-controller-manager-ha-454952-m03" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:50.994127   21531 pod_ready.go:81] duration metric: took 400.370156ms for pod "kube-controller-manager-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:50.994142   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6nkxm" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:51.189151   21531 request.go:629] Waited for 194.949221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nkxm
	I0404 21:47:51.189217   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nkxm
	I0404 21:47:51.189225   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:51.189235   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:51.189246   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:51.193508   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:51.388819   21531 request.go:629] Waited for 194.281073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:51.388882   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:51.388898   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:51.388912   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:51.388919   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:51.392793   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:51.393464   21531 pod_ready.go:92] pod "kube-proxy-6nkxm" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:51.393484   21531 pod_ready.go:81] duration metric: took 399.334643ms for pod "kube-proxy-6nkxm" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:51.393494   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fl4jh" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:51.589561   21531 request.go:629] Waited for 196.010357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fl4jh
	I0404 21:47:51.589644   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fl4jh
	I0404 21:47:51.589650   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:51.589658   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:51.589662   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:51.594586   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:51.789678   21531 request.go:629] Waited for 194.375907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:51.789737   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:51.789743   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:51.789750   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:51.789754   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:51.793886   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:51.794324   21531 pod_ready.go:92] pod "kube-proxy-fl4jh" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:51.794343   21531 pod_ready.go:81] duration metric: took 400.842302ms for pod "kube-proxy-fl4jh" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:51.794353   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gjvm9" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:51.989491   21531 request.go:629] Waited for 195.06636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjvm9
	I0404 21:47:51.989597   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjvm9
	I0404 21:47:51.989616   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:51.989631   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:51.989640   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:51.994034   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:52.189045   21531 request.go:629] Waited for 194.367312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:52.189112   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:52.189118   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:52.189128   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:52.189133   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:52.193117   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:52.193745   21531 pod_ready.go:92] pod "kube-proxy-gjvm9" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:52.193765   21531 pod_ready.go:81] duration metric: took 399.404583ms for pod "kube-proxy-gjvm9" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:52.193778   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:52.388706   21531 request.go:629] Waited for 194.860122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952
	I0404 21:47:52.388836   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952
	I0404 21:47:52.388844   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:52.388856   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:52.388904   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:52.393000   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:52.588978   21531 request.go:629] Waited for 195.367456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:52.589030   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952
	I0404 21:47:52.589036   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:52.589049   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:52.589055   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:52.593712   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:52.594749   21531 pod_ready.go:92] pod "kube-scheduler-ha-454952" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:52.594775   21531 pod_ready.go:81] duration metric: took 400.981465ms for pod "kube-scheduler-ha-454952" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:52.594788   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:52.789149   21531 request.go:629] Waited for 194.286662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m02
	I0404 21:47:52.789212   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m02
	I0404 21:47:52.789218   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:52.789225   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:52.789230   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:52.793336   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:52.989327   21531 request.go:629] Waited for 195.256576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:52.989402   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m02
	I0404 21:47:52.989413   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:52.989422   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:52.989428   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:52.993245   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:52.993935   21531 pod_ready.go:92] pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:52.993957   21531 pod_ready.go:81] duration metric: took 399.160574ms for pod "kube-scheduler-ha-454952-m02" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:52.993970   21531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:53.189075   21531 request.go:629] Waited for 195.01053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m03
	I0404 21:47:53.189130   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-454952-m03
	I0404 21:47:53.189135   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.189142   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.189147   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.193145   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:53.389470   21531 request.go:629] Waited for 195.359511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:53.389548   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes/ha-454952-m03
	I0404 21:47:53.389560   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.389569   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.389580   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.393665   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:53.394478   21531 pod_ready.go:92] pod "kube-scheduler-ha-454952-m03" in "kube-system" namespace has status "Ready":"True"
	I0404 21:47:53.394497   21531 pod_ready.go:81] duration metric: took 400.519758ms for pod "kube-scheduler-ha-454952-m03" in "kube-system" namespace to be "Ready" ...
	I0404 21:47:53.394508   21531 pod_ready.go:38] duration metric: took 10.800579463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
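	The per-pod readiness polling above can be reproduced from the host with kubectl's wait verb; a minimal sketch, assuming the kubectl context that minikube created for the ha-454952 profile and using the same label selectors listed in the log entry above:

	    kubectl --context ha-454952 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m
	    kubectl --context ha-454952 -n kube-system wait --for=condition=Ready pod -l component=kube-scheduler --timeout=6m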
	I0404 21:47:53.394523   21531 api_server.go:52] waiting for apiserver process to appear ...
	I0404 21:47:53.394572   21531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 21:47:53.410622   21531 api_server.go:72] duration metric: took 16.702457623s to wait for apiserver process to appear ...
	I0404 21:47:53.410646   21531 api_server.go:88] waiting for apiserver healthz status ...
	I0404 21:47:53.410663   21531 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0404 21:47:53.415122   21531 api_server.go:279] https://192.168.39.13:8443/healthz returned 200:
	ok
	I0404 21:47:53.415197   21531 round_trippers.go:463] GET https://192.168.39.13:8443/version
	I0404 21:47:53.415205   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.415216   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.415226   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.416582   21531 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0404 21:47:53.416723   21531 api_server.go:141] control plane version: v1.29.3
	I0404 21:47:53.416747   21531 api_server.go:131] duration metric: took 6.093013ms to wait for apiserver health ...
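	The same health and version probes can be run by hand through the API server's raw endpoints; a minimal sketch, assuming the ha-454952 context:

	    kubectl --context ha-454952 get --raw /healthz
	    kubectl --context ha-454952 get --raw /version

	/healthz returns the literal string "ok" when the API server is healthy, and /version reports the v1.29.3 control-plane build seen in the log.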
	I0404 21:47:53.416781   21531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 21:47:53.589448   21531 request.go:629] Waited for 172.559488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:47:53.589502   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:47:53.589514   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.589524   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.589530   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.598660   21531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0404 21:47:53.605181   21531 system_pods.go:59] 24 kube-system pods found
	I0404 21:47:53.605213   21531 system_pods.go:61] "coredns-76f75df574-9qsz7" [5af3d10e-47b7-439c-80e3-8ee328d87f16] Running
	I0404 21:47:53.605220   21531 system_pods.go:61] "coredns-76f75df574-hsdfw" [e0384e31-ec4b-4b09-b387-5bce7a36b688] Running
	I0404 21:47:53.605225   21531 system_pods.go:61] "etcd-ha-454952" [d3885d9d-9e4a-4ebb-9bbb-85d1fc88519a] Running
	I0404 21:47:53.605230   21531 system_pods.go:61] "etcd-ha-454952-m02" [a84a3d55-0e63-4944-8368-141d61a3dfdd] Running
	I0404 21:47:53.605233   21531 system_pods.go:61] "etcd-ha-454952-m03" [d2982156-d120-43d3-baf6-853acc915bb8] Running
	I0404 21:47:53.605238   21531 system_pods.go:61] "kindnet-7c9dv" [044a24e9-2851-47a3-be58-29ecfae2f0fb] Running
	I0404 21:47:53.605242   21531 system_pods.go:61] "kindnet-7v9fp" [9bf17455-7a45-4fbf-82d2-55bebd46ee2a] Running
	I0404 21:47:53.605247   21531 system_pods.go:61] "kindnet-v8wv6" [44250298-dce4-4e12-88c2-e347b4a63711] Running
	I0404 21:47:53.605250   21531 system_pods.go:61] "kube-apiserver-ha-454952" [93735313-30dc-4c96-b847-a6119cf400c8] Running
	I0404 21:47:53.605255   21531 system_pods.go:61] "kube-apiserver-ha-454952-m02" [b1abcf64-e80a-4e10-b069-c7d6827bda4a] Running
	I0404 21:47:53.605260   21531 system_pods.go:61] "kube-apiserver-ha-454952-m03" [80a7d0c0-874f-47e4-ab91-b40d5d89e741] Running
	I0404 21:47:53.605266   21531 system_pods.go:61] "kube-controller-manager-ha-454952" [17a4bba8-4424-4a4c-b5d4-88693cb013b6] Running
	I0404 21:47:53.605273   21531 system_pods.go:61] "kube-controller-manager-ha-454952-m02" [29adfa13-fd82-4b6b-a00a-3ecf9fb575de] Running
	I0404 21:47:53.605279   21531 system_pods.go:61] "kube-controller-manager-ha-454952-m03" [f9ec87de-84d2-4186-a4c3-71fe2e149fd1] Running
	I0404 21:47:53.605285   21531 system_pods.go:61] "kube-proxy-6nkxm" [67f1a256-ee89-4563-ab19-4a75f01d2c3a] Running
	I0404 21:47:53.605290   21531 system_pods.go:61] "kube-proxy-fl4jh" [77c75925-e886-40ca-9db8-0116823489df] Running
	I0404 21:47:53.605295   21531 system_pods.go:61] "kube-proxy-gjvm9" [60759cb6-a394-4e3e-a19e-f9b7c92a19db] Running
	I0404 21:47:53.605300   21531 system_pods.go:61] "kube-scheduler-ha-454952" [9ad2aa31-283d-47d1-a7ff-3c13974f4ba8] Running
	I0404 21:47:53.605309   21531 system_pods.go:61] "kube-scheduler-ha-454952-m02" [89c4a892-3be4-41a7-aa44-b19230ff2515] Running
	I0404 21:47:53.605315   21531 system_pods.go:61] "kube-scheduler-ha-454952-m03" [c0e524d7-282e-4ec1-aee3-1e52867895cc] Running
	I0404 21:47:53.605323   21531 system_pods.go:61] "kube-vip-ha-454952" [87898a7a-a2df-46cf-8b58-f6e5ca4e5f7b] Running
	I0404 21:47:53.605329   21531 system_pods.go:61] "kube-vip-ha-454952-m02" [2632721a-d7cc-40f1-988d-ab0aa8cfe79a] Running
	I0404 21:47:53.605337   21531 system_pods.go:61] "kube-vip-ha-454952-m03" [db7471a2-4620-4872-ab69-2a4722e7980a] Running
	I0404 21:47:53.605343   21531 system_pods.go:61] "storage-provisioner" [c8531ddb-fa9d-4efe-91cc-072e75a5897d] Running
	I0404 21:47:53.605351   21531 system_pods.go:74] duration metric: took 188.55864ms to wait for pod list to return data ...
	I0404 21:47:53.605363   21531 default_sa.go:34] waiting for default service account to be created ...
	I0404 21:47:53.788769   21531 request.go:629] Waited for 183.337016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/default/serviceaccounts
	I0404 21:47:53.788822   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/default/serviceaccounts
	I0404 21:47:53.788828   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.788835   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.788839   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.792760   21531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0404 21:47:53.792888   21531 default_sa.go:45] found service account: "default"
	I0404 21:47:53.792908   21531 default_sa.go:55] duration metric: took 187.534022ms for default service account to be created ...
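	A quick manual equivalent of the default-service-account check, assuming the same context:

	    kubectl --context ha-454952 -n default get serviceaccount default

	If the service account controller has not created the account yet, this returns a NotFound error instead of a single row.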
	I0404 21:47:53.792922   21531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 21:47:53.989300   21531 request.go:629] Waited for 196.315146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:47:53.989350   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/namespaces/kube-system/pods
	I0404 21:47:53.989355   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:53.989362   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:53.989366   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:53.997538   21531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0404 21:47:54.004474   21531 system_pods.go:86] 24 kube-system pods found
	I0404 21:47:54.004505   21531 system_pods.go:89] "coredns-76f75df574-9qsz7" [5af3d10e-47b7-439c-80e3-8ee328d87f16] Running
	I0404 21:47:54.004510   21531 system_pods.go:89] "coredns-76f75df574-hsdfw" [e0384e31-ec4b-4b09-b387-5bce7a36b688] Running
	I0404 21:47:54.004515   21531 system_pods.go:89] "etcd-ha-454952" [d3885d9d-9e4a-4ebb-9bbb-85d1fc88519a] Running
	I0404 21:47:54.004519   21531 system_pods.go:89] "etcd-ha-454952-m02" [a84a3d55-0e63-4944-8368-141d61a3dfdd] Running
	I0404 21:47:54.004523   21531 system_pods.go:89] "etcd-ha-454952-m03" [d2982156-d120-43d3-baf6-853acc915bb8] Running
	I0404 21:47:54.004527   21531 system_pods.go:89] "kindnet-7c9dv" [044a24e9-2851-47a3-be58-29ecfae2f0fb] Running
	I0404 21:47:54.004531   21531 system_pods.go:89] "kindnet-7v9fp" [9bf17455-7a45-4fbf-82d2-55bebd46ee2a] Running
	I0404 21:47:54.004536   21531 system_pods.go:89] "kindnet-v8wv6" [44250298-dce4-4e12-88c2-e347b4a63711] Running
	I0404 21:47:54.004540   21531 system_pods.go:89] "kube-apiserver-ha-454952" [93735313-30dc-4c96-b847-a6119cf400c8] Running
	I0404 21:47:54.004545   21531 system_pods.go:89] "kube-apiserver-ha-454952-m02" [b1abcf64-e80a-4e10-b069-c7d6827bda4a] Running
	I0404 21:47:54.004549   21531 system_pods.go:89] "kube-apiserver-ha-454952-m03" [80a7d0c0-874f-47e4-ab91-b40d5d89e741] Running
	I0404 21:47:54.004554   21531 system_pods.go:89] "kube-controller-manager-ha-454952" [17a4bba8-4424-4a4c-b5d4-88693cb013b6] Running
	I0404 21:47:54.004558   21531 system_pods.go:89] "kube-controller-manager-ha-454952-m02" [29adfa13-fd82-4b6b-a00a-3ecf9fb575de] Running
	I0404 21:47:54.004562   21531 system_pods.go:89] "kube-controller-manager-ha-454952-m03" [f9ec87de-84d2-4186-a4c3-71fe2e149fd1] Running
	I0404 21:47:54.004566   21531 system_pods.go:89] "kube-proxy-6nkxm" [67f1a256-ee89-4563-ab19-4a75f01d2c3a] Running
	I0404 21:47:54.004571   21531 system_pods.go:89] "kube-proxy-fl4jh" [77c75925-e886-40ca-9db8-0116823489df] Running
	I0404 21:47:54.004574   21531 system_pods.go:89] "kube-proxy-gjvm9" [60759cb6-a394-4e3e-a19e-f9b7c92a19db] Running
	I0404 21:47:54.004582   21531 system_pods.go:89] "kube-scheduler-ha-454952" [9ad2aa31-283d-47d1-a7ff-3c13974f4ba8] Running
	I0404 21:47:54.004586   21531 system_pods.go:89] "kube-scheduler-ha-454952-m02" [89c4a892-3be4-41a7-aa44-b19230ff2515] Running
	I0404 21:47:54.004590   21531 system_pods.go:89] "kube-scheduler-ha-454952-m03" [c0e524d7-282e-4ec1-aee3-1e52867895cc] Running
	I0404 21:47:54.004594   21531 system_pods.go:89] "kube-vip-ha-454952" [87898a7a-a2df-46cf-8b58-f6e5ca4e5f7b] Running
	I0404 21:47:54.004600   21531 system_pods.go:89] "kube-vip-ha-454952-m02" [2632721a-d7cc-40f1-988d-ab0aa8cfe79a] Running
	I0404 21:47:54.004603   21531 system_pods.go:89] "kube-vip-ha-454952-m03" [db7471a2-4620-4872-ab69-2a4722e7980a] Running
	I0404 21:47:54.004610   21531 system_pods.go:89] "storage-provisioner" [c8531ddb-fa9d-4efe-91cc-072e75a5897d] Running
	I0404 21:47:54.004616   21531 system_pods.go:126] duration metric: took 211.688695ms to wait for k8s-apps to be running ...
	I0404 21:47:54.004625   21531 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 21:47:54.004667   21531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 21:47:54.021779   21531 system_svc.go:56] duration metric: took 17.142344ms WaitForService to wait for kubelet
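	The kubelet liveness check above is an ordinary systemd query and can be repeated over minikube's ssh wrapper; a sketch, assuming the ha-454952 profile:

	    minikube -p ha-454952 ssh "sudo systemctl is-active kubelet"

	systemctl is-active prints the unit state and exits non-zero when the unit is not active, which is the signal the harness relies on.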
	I0404 21:47:54.021813   21531 kubeadm.go:576] duration metric: took 17.31364983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 21:47:54.021832   21531 node_conditions.go:102] verifying NodePressure condition ...
	I0404 21:47:54.189232   21531 request.go:629] Waited for 167.316748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.13:8443/api/v1/nodes
	I0404 21:47:54.189280   21531 round_trippers.go:463] GET https://192.168.39.13:8443/api/v1/nodes
	I0404 21:47:54.189285   21531 round_trippers.go:469] Request Headers:
	I0404 21:47:54.189293   21531 round_trippers.go:473]     Accept: application/json, */*
	I0404 21:47:54.189297   21531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0404 21:47:54.193610   21531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0404 21:47:54.194644   21531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 21:47:54.194665   21531 node_conditions.go:123] node cpu capacity is 2
	I0404 21:47:54.194675   21531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 21:47:54.194678   21531 node_conditions.go:123] node cpu capacity is 2
	I0404 21:47:54.194681   21531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 21:47:54.194684   21531 node_conditions.go:123] node cpu capacity is 2
	I0404 21:47:54.194688   21531 node_conditions.go:105] duration metric: took 172.852606ms to run NodePressure ...
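	The capacity figures used for the NodePressure check come straight from each node's status; a sketch for listing them, assuming the same context:

	    kubectl --context ha-454952 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'

	Each of the three nodes should report 2 CPUs and 17734596Ki of ephemeral storage, matching the values logged above.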
	I0404 21:47:54.194699   21531 start.go:240] waiting for startup goroutines ...
	I0404 21:47:54.194717   21531 start.go:254] writing updated cluster config ...
	I0404 21:47:54.195015   21531 ssh_runner.go:195] Run: rm -f paused
	I0404 21:47:54.247265   21531 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 21:47:54.249516   21531 out.go:177] * Done! kubectl is now configured to use "ha-454952" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.134046443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267536133979249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4938ab3-e263-4fe3-8ab8-c779dbedbc0c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.134975265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00b2bf3c-51f9-4643-8665-c9556e7fcb74 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.135032960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00b2bf3c-51f9-4643-8665-c9556e7fcb74 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.135310077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267279351337073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f910060a98869ee2c016b8406af4f22a303c1718c8717c07899f29f596b29cb,PodSandboxId:e1823b9750831163e441ebb50f4fbaef84bd4361b664a285dcb8378c4a583aa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267102036847137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101615113809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101585172807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-e
c4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3b245ea3482de34a741540e05e226d95a56b87a5397b1db4fc9cc669a70a69,PodSandboxId:90fe92fd101c4d66d5b7a4223cd6ea20254dc790eeedd23b086c451b00534975,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267
099629234265,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267099360228042,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c8fa7da2804867788af45dab4db03cd14b20ee017b4626f4e256792a8e568d,PodSandboxId:204ef6b79c8cb0db393523a449d5288bbbbe42f7f7943343747a321776d2dc5a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267082660799590,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8ce5491d2921db8d31119ee2f820a1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65,PodSandboxId:2d41ace5ee35fc68b4b09e8c976e0fcb5e621f13990528f6b247bda12a71bc1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267079720598876,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267079628176165,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1,PodSandboxId:a29d53a59569a4a39c5d5ae9aaf81020cc4c9efa4331c6bcbc49ff108002bf66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267079613888504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267079596265674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00b2bf3c-51f9-4643-8665-c9556e7fcb74 name=/runtime.v1.RuntimeService/ListContainers
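	The ListContainers and ImageFsInfo payloads above are the raw CRI responses; crictl renders the same data in tabular form on the node. A sketch, assuming the ha-454952 profile (if crictl is not already pointed at CRI-O, add --runtime-endpoint unix:///var/run/crio/crio.sock):

	    minikube -p ha-454952 ssh "sudo crictl ps -a"
	    minikube -p ha-454952 ssh "sudo crictl imagefsinfo"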
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.179219103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=949fe086-68cf-4b02-be72-3f0b875bf873 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.179807065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=949fe086-68cf-4b02-be72-3f0b875bf873 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.181191852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb29aa85-6b34-4597-be6b-87bfb762b135 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.181614250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267536181595198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb29aa85-6b34-4597-be6b-87bfb762b135 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.182449339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad5247d4-6191-470b-bef2-130ad4ad8f9b name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.182656900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad5247d4-6191-470b-bef2-130ad4ad8f9b name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.183107601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267279351337073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f910060a98869ee2c016b8406af4f22a303c1718c8717c07899f29f596b29cb,PodSandboxId:e1823b9750831163e441ebb50f4fbaef84bd4361b664a285dcb8378c4a583aa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267102036847137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101615113809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101585172807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-e
c4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3b245ea3482de34a741540e05e226d95a56b87a5397b1db4fc9cc669a70a69,PodSandboxId:90fe92fd101c4d66d5b7a4223cd6ea20254dc790eeedd23b086c451b00534975,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267
099629234265,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267099360228042,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c8fa7da2804867788af45dab4db03cd14b20ee017b4626f4e256792a8e568d,PodSandboxId:204ef6b79c8cb0db393523a449d5288bbbbe42f7f7943343747a321776d2dc5a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267082660799590,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8ce5491d2921db8d31119ee2f820a1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65,PodSandboxId:2d41ace5ee35fc68b4b09e8c976e0fcb5e621f13990528f6b247bda12a71bc1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267079720598876,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267079628176165,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1,PodSandboxId:a29d53a59569a4a39c5d5ae9aaf81020cc4c9efa4331c6bcbc49ff108002bf66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267079613888504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267079596265674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad5247d4-6191-470b-bef2-130ad4ad8f9b name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.229966023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98e8bc5a-df5a-47f3-81de-a919cde3c185 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.230041349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98e8bc5a-df5a-47f3-81de-a919cde3c185 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.231327694Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=84567066-fb28-4d07-9161-6e1771e77bff name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.231985617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267536231957608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=84567066-fb28-4d07-9161-6e1771e77bff name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.232577844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50909593-8d65-4619-96b1-e96a6f4a9806 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.232657606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50909593-8d65-4619-96b1-e96a6f4a9806 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.233077527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267279351337073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f910060a98869ee2c016b8406af4f22a303c1718c8717c07899f29f596b29cb,PodSandboxId:e1823b9750831163e441ebb50f4fbaef84bd4361b664a285dcb8378c4a583aa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267102036847137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101615113809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101585172807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-e
c4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3b245ea3482de34a741540e05e226d95a56b87a5397b1db4fc9cc669a70a69,PodSandboxId:90fe92fd101c4d66d5b7a4223cd6ea20254dc790eeedd23b086c451b00534975,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267
099629234265,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267099360228042,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c8fa7da2804867788af45dab4db03cd14b20ee017b4626f4e256792a8e568d,PodSandboxId:204ef6b79c8cb0db393523a449d5288bbbbe42f7f7943343747a321776d2dc5a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267082660799590,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8ce5491d2921db8d31119ee2f820a1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65,PodSandboxId:2d41ace5ee35fc68b4b09e8c976e0fcb5e621f13990528f6b247bda12a71bc1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267079720598876,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267079628176165,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1,PodSandboxId:a29d53a59569a4a39c5d5ae9aaf81020cc4c9efa4331c6bcbc49ff108002bf66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267079613888504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267079596265674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50909593-8d65-4619-96b1-e96a6f4a9806 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.275143845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8cb8e4f9-e7b9-4d83-9551-74c5037954f9 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.275269579Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8cb8e4f9-e7b9-4d83-9551-74c5037954f9 name=/runtime.v1.RuntimeService/Version
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.276981948Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=16d19ae0-5a28-47b3-b8ab-f4f0764407d4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.277433768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267536277405459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16d19ae0-5a28-47b3-b8ab-f4f0764407d4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.278357872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8de23de7-be28-4225-a45d-edda28b4b978 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.278600382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8de23de7-be28-4225-a45d-edda28b4b978 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:52:16 ha-454952 crio[685]: time="2024-04-04 21:52:16.279229824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267279351337073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f910060a98869ee2c016b8406af4f22a303c1718c8717c07899f29f596b29cb,PodSandboxId:e1823b9750831163e441ebb50f4fbaef84bd4361b664a285dcb8378c4a583aa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267102036847137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101615113809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267101585172807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-e
c4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3b245ea3482de34a741540e05e226d95a56b87a5397b1db4fc9cc669a70a69,PodSandboxId:90fe92fd101c4d66d5b7a4223cd6ea20254dc790eeedd23b086c451b00534975,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267
099629234265,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267099360228042,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c8fa7da2804867788af45dab4db03cd14b20ee017b4626f4e256792a8e568d,PodSandboxId:204ef6b79c8cb0db393523a449d5288bbbbe42f7f7943343747a321776d2dc5a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267082660799590,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8ce5491d2921db8d31119ee2f820a1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65,PodSandboxId:2d41ace5ee35fc68b4b09e8c976e0fcb5e621f13990528f6b247bda12a71bc1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267079720598876,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267079628176165,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1,PodSandboxId:a29d53a59569a4a39c5d5ae9aaf81020cc4c9efa4331c6bcbc49ff108002bf66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267079613888504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267079596265674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8de23de7-be28-4225-a45d-edda28b4b978 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85478f2f51e89       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   2c8e166c4509c       busybox-7fdf7869d9-q56fw
	8f910060a9886       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   e1823b9750831       storage-provisioner
	2f6afcac0a6b1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   b1934889b30c3       coredns-76f75df574-9qsz7
	b3fc8d8ef023d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   0b786dbf91033       coredns-76f75df574-hsdfw
	2a3b245ea3482       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   90fe92fd101c4       kindnet-v8wv6
	90c39a2687464       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago       Running             kube-proxy                0                   2748de75b7d2d       kube-proxy-gjvm9
	a0c8fa7da2804       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   204ef6b79c8cb       kube-vip-ha-454952
	c3820dd809544       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago       Running             kube-controller-manager   0                   2d41ace5ee35f       kube-controller-manager-ha-454952
	e9faec0816d4c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago       Running             kube-scheduler            0                   9f1d5c3d0af96       kube-scheduler-ha-454952
	a94e56804eb2e       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago       Running             kube-apiserver            0                   a29d53a59569a       kube-apiserver-ha-454952
	72549bccc4ca2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   92d02e4d213b3       etcd-ha-454952
	
	
	==> coredns [2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f] <==
	[INFO] 10.244.1.2:55731 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000112498s
	[INFO] 10.244.1.2:51841 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001879121s
	[INFO] 10.244.2.2:33882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001666s
	[INFO] 10.244.2.2:59301 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003616562s
	[INFO] 10.244.2.2:38692 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000240884s
	[INFO] 10.244.2.2:49348 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146448s
	[INFO] 10.244.2.2:48867 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138618s
	[INFO] 10.244.0.4:54618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070304s
	[INFO] 10.244.1.2:58936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144716s
	[INFO] 10.244.1.2:43170 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002050369s
	[INFO] 10.244.1.2:59811 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149418s
	[INFO] 10.244.1.2:58173 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001389488s
	[INFO] 10.244.1.2:50742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078385s
	[INFO] 10.244.1.2:46973 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077499s
	[INFO] 10.244.2.2:43785 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153069s
	[INFO] 10.244.2.2:37406 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074939s
	[INFO] 10.244.0.4:41091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141133s
	[INFO] 10.244.0.4:44476 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202801s
	[INFO] 10.244.0.4:45234 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104556s
	[INFO] 10.244.1.2:39647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182075s
	[INFO] 10.244.1.2:50588 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151414s
	[INFO] 10.244.1.2:41606 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195991s
	[INFO] 10.244.2.2:53483 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000232191s
	[INFO] 10.244.2.2:60437 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000132599s
	[INFO] 10.244.1.2:51965 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166052s
	
	
	==> coredns [b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c] <==
	[INFO] 10.244.2.2:52520 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000300772s
	[INFO] 10.244.2.2:56049 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000228681s
	[INFO] 10.244.2.2:38128 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003078889s
	[INFO] 10.244.0.4:60519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135291s
	[INFO] 10.244.0.4:43464 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002071208s
	[INFO] 10.244.0.4:51293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085331s
	[INFO] 10.244.0.4:55321 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087493s
	[INFO] 10.244.0.4:59685 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001579648s
	[INFO] 10.244.0.4:33041 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157393s
	[INFO] 10.244.0.4:58677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109886s
	[INFO] 10.244.1.2:59156 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010739s
	[INFO] 10.244.1.2:53747 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144738s
	[INFO] 10.244.2.2:48166 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144032s
	[INFO] 10.244.2.2:36301 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000211342s
	[INFO] 10.244.0.4:34383 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072486s
	[INFO] 10.244.1.2:47623 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000275299s
	[INFO] 10.244.2.2:36199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000346157s
	[INFO] 10.244.2.2:51401 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193332s
	[INFO] 10.244.0.4:48691 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082711s
	[INFO] 10.244.0.4:37702 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000047018s
	[INFO] 10.244.0.4:59456 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000854s
	[INFO] 10.244.0.4:56014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070317s
	[INFO] 10.244.1.2:47145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204326s
	[INFO] 10.244.1.2:36898 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127022s
	[INFO] 10.244.1.2:42608 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109931s
	
	
	==> describe nodes <==
	Name:               ha-454952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T21_44_47_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:44:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:52:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:48:20 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:48:20 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:48:20 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:48:20 +0000   Thu, 04 Apr 2024 21:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    ha-454952
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bcaf06686d84ca785ca1e79fc3ee92b
	  System UUID:                9bcaf066-86d8-4ca7-85ca-1e79fc3ee92b
	  Boot ID:                    00b02ff9-8c43-4004-ab1c-4fcde5b8a674
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-q56fw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 coredns-76f75df574-9qsz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m17s
	  kube-system                 coredns-76f75df574-hsdfw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m17s
	  kube-system                 etcd-ha-454952                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m31s
	  kube-system                 kindnet-v8wv6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m18s
	  kube-system                 kube-apiserver-ha-454952             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-controller-manager-ha-454952    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-proxy-gjvm9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 kube-scheduler-ha-454952             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-vip-ha-454952                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m16s                  kube-proxy       
	  Normal  Starting                 7m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m37s (x7 over 7m38s)  kubelet          Node ha-454952 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m37s (x8 over 7m38s)  kubelet          Node ha-454952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s (x8 over 7m38s)  kubelet          Node ha-454952 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m30s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m30s                  kubelet          Node ha-454952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m30s                  kubelet          Node ha-454952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m30s                  kubelet          Node ha-454952 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m19s                  node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal  NodeReady                7m15s                  kubelet          Node ha-454952 status is now: NodeReady
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal  RegisteredNode           4m27s                  node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	
	
	Name:               ha-454952-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T21_46_24_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:46:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:49:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 04 Apr 2024 21:48:22 +0000   Thu, 04 Apr 2024 21:49:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 04 Apr 2024 21:48:22 +0000   Thu, 04 Apr 2024 21:49:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 04 Apr 2024 21:48:22 +0000   Thu, 04 Apr 2024 21:49:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 04 Apr 2024 21:48:22 +0000   Thu, 04 Apr 2024 21:49:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-454952-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f458ea60975d458aa9cb6e203993b49a
	  System UUID:                f458ea60-975d-458a-a9cb-6e203993b49a
	  Boot ID:                    45704b3c-2202-4d10-9e3c-5b89634b1116
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rshl2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-ha-454952-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m54s
	  kube-system                 kindnet-7c9dv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m56s
	  kube-system                 kube-apiserver-ha-454952-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-controller-manager-ha-454952-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-proxy-6nkxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-scheduler-ha-454952-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-vip-ha-454952-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet          Node ha-454952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet          Node ha-454952-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x7 over 5m56s)  kubelet          Node ha-454952-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m54s                  node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           4m27s                  node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  NodeNotReady             2m32s                  node-controller  Node ha-454952-m02 status is now: NodeNotReady
	
	
	Name:               ha-454952-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T21_47_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:47:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:52:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:48:01 +0000   Thu, 04 Apr 2024 21:47:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:48:01 +0000   Thu, 04 Apr 2024 21:47:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:48:01 +0000   Thu, 04 Apr 2024 21:47:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:48:01 +0000   Thu, 04 Apr 2024 21:47:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-454952-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b7367a50ec545c4ae6fb446cfb73753
	  System UUID:                4b7367a5-0ec5-45c4-ae6f-b446cfb73753
	  Boot ID:                    2b997353-af0c-4d49-8d13-945875ed8eb6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-8qf48                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-ha-454952-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m44s
	  kube-system                 kindnet-7v9fp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m46s
	  kube-system                 kube-apiserver-ha-454952-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-controller-manager-ha-454952-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-fl4jh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-scheduler-ha-454952-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-vip-ha-454952-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m41s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m46s (x8 over 4m46s)  kubelet          Node ha-454952-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s (x8 over 4m46s)  kubelet          Node ha-454952-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s (x7 over 4m46s)  kubelet          Node ha-454952-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	  Normal  RegisteredNode           4m27s                  node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	
	
	Name:               ha-454952-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T21_48_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:48:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:52:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:49:03 +0000   Thu, 04 Apr 2024 21:48:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:49:03 +0000   Thu, 04 Apr 2024 21:48:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:49:03 +0000   Thu, 04 Apr 2024 21:48:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:49:03 +0000   Thu, 04 Apr 2024 21:48:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-454952-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eaf323303c74873975b4953c592319b
	  System UUID:                0eaf3233-03c7-4873-975b-4953c592319b
	  Boot ID:                    4fc91205-3a73-4a27-9638-4008c1292325
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6mmgj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m43s
	  kube-system                 kube-proxy-5j62j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m38s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m43s (x2 over 3m43s)  kubelet          Node ha-454952-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s (x2 over 3m43s)  kubelet          Node ha-454952-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s (x2 over 3m43s)  kubelet          Node ha-454952-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m42s                  node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal  RegisteredNode           3m38s                  node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal  RegisteredNode           3m38s                  node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal  NodeReady                3m33s                  kubelet          Node ha-454952-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr 4 21:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053353] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041480] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.565978] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.745346] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.640914] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.710951] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.059484] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060191] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.177107] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.151661] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.307912] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.603000] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.064613] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.478091] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +0.520027] kauditd_printk_skb: 48 callbacks suppressed
	[  +7.408849] systemd-fstab-generator[1387]: Ignoring "noauto" option for root device
	[  +0.092051] kauditd_printk_skb: 49 callbacks suppressed
	[ +12.761594] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 4 21:46] kauditd_printk_skb: 76 callbacks suppressed
	
	
	==> etcd [72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3] <==
	{"level":"warn","ts":"2024-04-04T21:52:16.568993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.574149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.579227Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.581931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.586184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.605618Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.615757Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.623287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.626871Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.630494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.638622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.646422Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.654014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.659995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.664171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.672512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.678344Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.683607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.690643Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.694509Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.697416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.706178Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.717429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.724847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:52:16.778213Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:52:16 up 8 min,  0 users,  load average: 0.16, 0.46, 0.27
	Linux ha-454952 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2a3b245ea3482de34a741540e05e226d95a56b87a5397b1db4fc9cc669a70a69] <==
	I0404 21:51:41.404505       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 21:51:51.412271       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 21:51:51.412605       1 main.go:227] handling current node
	I0404 21:51:51.412655       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 21:51:51.412750       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 21:51:51.412883       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0404 21:51:51.412904       1 main.go:250] Node ha-454952-m03 has CIDR [10.244.2.0/24] 
	I0404 21:51:51.412987       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 21:51:51.413007       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 21:52:01.421474       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 21:52:01.421600       1 main.go:227] handling current node
	I0404 21:52:01.421634       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 21:52:01.421658       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 21:52:01.421957       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0404 21:52:01.477763       1 main.go:250] Node ha-454952-m03 has CIDR [10.244.2.0/24] 
	I0404 21:52:01.478581       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 21:52:01.478627       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 21:52:11.486046       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 21:52:11.486094       1 main.go:227] handling current node
	I0404 21:52:11.486110       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 21:52:11.486116       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 21:52:11.486254       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0404 21:52:11.486295       1 main.go:250] Node ha-454952-m03 has CIDR [10.244.2.0/24] 
	I0404 21:52:11.486354       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 21:52:11.486385       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1] <==
	I0404 21:44:42.975292       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0404 21:44:42.975316       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0404 21:44:42.992175       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0404 21:44:42.992224       1 aggregator.go:165] initial CRD sync complete...
	I0404 21:44:42.992231       1 autoregister_controller.go:141] Starting autoregister controller
	I0404 21:44:42.992236       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0404 21:44:42.992243       1 cache.go:39] Caches are synced for autoregister controller
	I0404 21:44:43.010463       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0404 21:44:43.020528       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0404 21:44:43.882866       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0404 21:44:43.891393       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0404 21:44:43.891436       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0404 21:44:44.767100       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0404 21:44:44.818568       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0404 21:44:44.898133       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0404 21:44:44.905347       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0404 21:44:44.919860       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.13]
	I0404 21:44:44.921482       1 controller.go:624] quota admission added evaluator for: endpoints
	I0404 21:44:44.926803       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0404 21:44:46.513153       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0404 21:44:46.539446       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0404 21:44:46.550519       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0404 21:44:58.606495       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0404 21:44:58.963925       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	W0404 21:49:14.925858       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.13 192.168.39.217]
	
	
	==> kube-controller-manager [c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65] <==
	I0404 21:47:56.660364       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="43.803µs"
	I0404 21:47:58.588132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.452482ms"
	I0404 21:47:58.588860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="189.895µs"
	I0404 21:47:58.814779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="11.497163ms"
	I0404 21:47:58.814867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="39.61µs"
	I0404 21:47:59.634010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.507634ms"
	I0404 21:47:59.634872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="59.366µs"
	E0404 21:48:32.982249       1 certificate_controller.go:146] Sync csr-7qwg4 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-7qwg4": the object has been modified; please apply your changes to the latest version and try again
	E0404 21:48:32.985440       1 certificate_controller.go:146] Sync csr-7qwg4 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-7qwg4": the object has been modified; please apply your changes to the latest version and try again
	I0404 21:48:33.275529       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-454952-m04\" does not exist"
	I0404 21:48:33.357859       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rxzk6"
	I0404 21:48:33.359611       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vkhx6"
	I0404 21:48:33.370013       1 range_allocator.go:380] "Set node PodCIDR" node="ha-454952-m04" podCIDRs=["10.244.3.0/24"]
	I0404 21:48:33.503662       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-vkhx6"
	I0404 21:48:33.535295       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-97shf"
	I0404 21:48:33.585806       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-ctxj5"
	E0404 21:48:33.614165       1 daemon_controller.go:326] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"2e973454-81e4-41a4-9525-61d5c5586ff2", ResourceVersion:"988", Generation:1, CreationTimestamp:time.Date(2024, time.April, 4, 21, 44, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00100b200), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1
, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVol
umeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0020948c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0017dc318), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVo
lumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:
v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0017dc330), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPers
istentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"registry.k8s.io/kube-proxy:v1.29.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00100b240)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"ku
be-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001a37aa0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c16b28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", No
deSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00050ed20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil
), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001d4b3e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001c16b80)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0404 21:48:33.642548       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-rxzk6"
	I0404 21:48:38.010841       1 event.go:376] "Event occurred" object="ha-454952-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller"
	I0404 21:48:38.027056       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-454952-m04"
	I0404 21:48:43.311557       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-454952-m04"
	I0404 21:49:44.309356       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-454952-m04"
	I0404 21:49:44.360889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="29.759619ms"
	I0404 21:49:44.360993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="49.841µs"
	
	
	==> kube-proxy [90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05] <==
	I0404 21:44:59.909318       1 server_others.go:72] "Using iptables proxy"
	I0404 21:44:59.936579       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.13"]
	I0404 21:44:59.996411       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 21:44:59.996464       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 21:44:59.996479       1 server_others.go:168] "Using iptables Proxier"
	I0404 21:45:00.004335       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 21:45:00.004600       1 server.go:865] "Version info" version="v1.29.3"
	I0404 21:45:00.004636       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 21:45:00.009109       1 config.go:315] "Starting node config controller"
	I0404 21:45:00.009536       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 21:45:00.019011       1 config.go:188] "Starting service config controller"
	I0404 21:45:00.019046       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 21:45:00.019065       1 config.go:97] "Starting endpoint slice config controller"
	I0404 21:45:00.019069       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 21:45:00.110411       1 shared_informer.go:318] Caches are synced for node config
	I0404 21:45:00.121939       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0404 21:45:00.122095       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048] <==
	E0404 21:44:44.511804       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0404 21:44:44.515508       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0404 21:44:44.515572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0404 21:44:45.956811       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 21:47:55.195896       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="96ea0da6-790d-4093-8ac2-25d90308000e" pod="default/busybox-7fdf7869d9-8qf48" assumedNode="ha-454952-m03" currentNode="ha-454952-m02"
	E0404 21:47:55.220542       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-8qf48\": pod busybox-7fdf7869d9-8qf48 is already assigned to node \"ha-454952-m03\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-8qf48" node="ha-454952-m02"
	E0404 21:47:55.220750       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 96ea0da6-790d-4093-8ac2-25d90308000e(default/busybox-7fdf7869d9-8qf48) was assumed on ha-454952-m02 but assigned to ha-454952-m03"
	E0404 21:47:55.220835       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-8qf48\": pod busybox-7fdf7869d9-8qf48 is already assigned to node \"ha-454952-m03\"" pod="default/busybox-7fdf7869d9-8qf48"
	I0404 21:47:55.220980       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-8qf48" node="ha-454952-m03"
	E0404 21:47:55.274958       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-pcb8c\": pod busybox-7fdf7869d9-pcb8c is already assigned to node \"ha-454952\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-pcb8c" node="ha-454952"
	E0404 21:47:55.275055       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 27044b2b-8296-4cca-811e-4a0584edabbf(default/busybox-7fdf7869d9-pcb8c) wasn't assumed so cannot be forgotten"
	E0404 21:47:55.275105       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-pcb8c\": pod busybox-7fdf7869d9-pcb8c is already assigned to node \"ha-454952\"" pod="default/busybox-7fdf7869d9-pcb8c"
	I0404 21:47:55.275170       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-pcb8c" node="ha-454952"
	E0404 21:48:33.418226       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vkhx6\": pod kindnet-vkhx6 is already assigned to node \"ha-454952-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vkhx6" node="ha-454952-m04"
	E0404 21:48:33.418326       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 5814bfb4-ad69-4d7b-b7e9-5870b1db6184(kube-system/kindnet-vkhx6) wasn't assumed so cannot be forgotten"
	E0404 21:48:33.418377       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vkhx6\": pod kindnet-vkhx6 is already assigned to node \"ha-454952-m04\"" pod="kube-system/kindnet-vkhx6"
	I0404 21:48:33.418403       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vkhx6" node="ha-454952-m04"
	E0404 21:48:33.418771       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rxzk6\": pod kube-proxy-rxzk6 is already assigned to node \"ha-454952-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rxzk6" node="ha-454952-m04"
	E0404 21:48:33.418937       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod fe3ae4e5-f3df-4635-8cb3-056592eac2a2(kube-system/kube-proxy-rxzk6) wasn't assumed so cannot be forgotten"
	E0404 21:48:33.418994       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rxzk6\": pod kube-proxy-rxzk6 is already assigned to node \"ha-454952-m04\"" pod="kube-system/kube-proxy-rxzk6"
	I0404 21:48:33.419027       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rxzk6" node="ha-454952-m04"
	E0404 21:48:33.463113       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-97shf\": pod kube-proxy-97shf is already assigned to node \"ha-454952-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-97shf" node="ha-454952-m04"
	E0404 21:48:33.463268       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 1765307b-c5ff-43e2-909d-b541f9cd6f85(kube-system/kube-proxy-97shf) wasn't assumed so cannot be forgotten"
	E0404 21:48:33.463492       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-97shf\": pod kube-proxy-97shf is already assigned to node \"ha-454952-m04\"" pod="kube-system/kube-proxy-97shf"
	I0404 21:48:33.466246       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-97shf" node="ha-454952-m04"
	
	
	==> kubelet <==
	Apr 04 21:47:55 ha-454952 kubelet[1393]: I0404 21:47:55.631457    1393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27044b2b-8296-4cca-811e-4a0584edabbf-kube-api-access-x6nj4" (OuterVolumeSpecName: "kube-api-access-x6nj4") pod "27044b2b-8296-4cca-811e-4a0584edabbf" (UID: "27044b2b-8296-4cca-811e-4a0584edabbf"). InnerVolumeSpecName "kube-api-access-x6nj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 04 21:47:55 ha-454952 kubelet[1393]: I0404 21:47:55.722472    1393 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x6nj4\" (UniqueName: \"kubernetes.io/projected/27044b2b-8296-4cca-811e-4a0584edabbf-kube-api-access-x6nj4\") on node \"ha-454952\" DevicePath \"\""
	Apr 04 21:47:56 ha-454952 kubelet[1393]: I0404 21:47:56.622638    1393 topology_manager.go:215] "Topology Admit Handler" podUID="53780518-8100-4f1a-993c-fb9c76dfecb1" podNamespace="default" podName="busybox-7fdf7869d9-q56fw"
	Apr 04 21:47:56 ha-454952 kubelet[1393]: I0404 21:47:56.628154    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmndm\" (UniqueName: \"kubernetes.io/projected/53780518-8100-4f1a-993c-fb9c76dfecb1-kube-api-access-tmndm\") pod \"busybox-7fdf7869d9-q56fw\" (UID: \"53780518-8100-4f1a-993c-fb9c76dfecb1\") " pod="default/busybox-7fdf7869d9-q56fw"
	Apr 04 21:47:56 ha-454952 kubelet[1393]: I0404 21:47:56.708200    1393 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27044b2b-8296-4cca-811e-4a0584edabbf" path="/var/lib/kubelet/pods/27044b2b-8296-4cca-811e-4a0584edabbf/volumes"
	Apr 04 21:48:46 ha-454952 kubelet[1393]: E0404 21:48:46.749195    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:48:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:48:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:48:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:48:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 21:49:46 ha-454952 kubelet[1393]: E0404 21:49:46.750885    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:49:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:49:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:49:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:49:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 21:50:46 ha-454952 kubelet[1393]: E0404 21:50:46.750415    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:50:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:50:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:50:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:50:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 21:51:46 ha-454952 kubelet[1393]: E0404 21:51:46.746817    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:51:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:51:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:51:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:51:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-454952 -n ha-454952
helpers_test.go:261: (dbg) Run:  kubectl --context ha-454952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (47.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (364.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-454952 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-454952 -v=7 --alsologtostderr
E0404 21:53:09.142474   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:53:36.825606   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:53:50.480188   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-454952 -v=7 --alsologtostderr: exit status 82 (2m2.722820005s)

                                                
                                                
-- stdout --
	* Stopping node "ha-454952-m04"  ...
	* Stopping node "ha-454952-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 21:52:18.277841   26830 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:52:18.277962   26830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:52:18.277972   26830 out.go:304] Setting ErrFile to fd 2...
	I0404 21:52:18.277976   26830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:52:18.278217   26830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:52:18.278459   26830 out.go:298] Setting JSON to false
	I0404 21:52:18.278549   26830 mustload.go:65] Loading cluster: ha-454952
	I0404 21:52:18.278909   26830 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:52:18.278995   26830 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:52:18.279164   26830 mustload.go:65] Loading cluster: ha-454952
	I0404 21:52:18.279308   26830 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:52:18.279341   26830 stop.go:39] StopHost: ha-454952-m04
	I0404 21:52:18.279702   26830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:18.279750   26830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:18.294467   26830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33339
	I0404 21:52:18.294915   26830 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:18.295488   26830 main.go:141] libmachine: Using API Version  1
	I0404 21:52:18.295511   26830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:18.295846   26830 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:18.298862   26830 out.go:177] * Stopping node "ha-454952-m04"  ...
	I0404 21:52:18.300789   26830 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0404 21:52:18.300816   26830 main.go:141] libmachine: (ha-454952-m04) Calling .DriverName
	I0404 21:52:18.301022   26830 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0404 21:52:18.301045   26830 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHHostname
	I0404 21:52:18.304209   26830 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:18.304609   26830 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:48:19 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:52:18.304638   26830 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:52:18.304742   26830 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHPort
	I0404 21:52:18.304889   26830 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHKeyPath
	I0404 21:52:18.305031   26830 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHUsername
	I0404 21:52:18.305186   26830 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m04/id_rsa Username:docker}
	I0404 21:52:18.389517   26830 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0404 21:52:18.445518   26830 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0404 21:52:18.500657   26830 main.go:141] libmachine: Stopping "ha-454952-m04"...
	I0404 21:52:18.500681   26830 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 21:52:18.502593   26830 main.go:141] libmachine: (ha-454952-m04) Calling .Stop
	I0404 21:52:18.506670   26830 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 0/120
	I0404 21:52:19.508182   26830 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 1/120
	I0404 21:52:20.510480   26830 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 21:52:20.511933   26830 main.go:141] libmachine: Machine "ha-454952-m04" was stopped.
	I0404 21:52:20.511952   26830 stop.go:75] duration metric: took 2.211164759s to stop
	I0404 21:52:20.511985   26830 stop.go:39] StopHost: ha-454952-m03
	I0404 21:52:20.512307   26830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:52:20.512351   26830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:52:20.527955   26830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41337
	I0404 21:52:20.528440   26830 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:52:20.528966   26830 main.go:141] libmachine: Using API Version  1
	I0404 21:52:20.528989   26830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:52:20.529349   26830 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:52:20.531898   26830 out.go:177] * Stopping node "ha-454952-m03"  ...
	I0404 21:52:20.533549   26830 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0404 21:52:20.533575   26830 main.go:141] libmachine: (ha-454952-m03) Calling .DriverName
	I0404 21:52:20.533827   26830 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0404 21:52:20.533850   26830 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHHostname
	I0404 21:52:20.536913   26830 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:20.537371   26830 main.go:141] libmachine: (ha-454952-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:12:2d", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:46:54 +0000 UTC Type:0 Mac:52:54:00:9a:12:2d Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-454952-m03 Clientid:01:52:54:00:9a:12:2d}
	I0404 21:52:20.537396   26830 main.go:141] libmachine: (ha-454952-m03) DBG | domain ha-454952-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:9a:12:2d in network mk-ha-454952
	I0404 21:52:20.537515   26830 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHPort
	I0404 21:52:20.537703   26830 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHKeyPath
	I0404 21:52:20.537873   26830 main.go:141] libmachine: (ha-454952-m03) Calling .GetSSHUsername
	I0404 21:52:20.538015   26830 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m03/id_rsa Username:docker}
	I0404 21:52:20.624397   26830 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0404 21:52:20.679456   26830 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0404 21:52:20.735268   26830 main.go:141] libmachine: Stopping "ha-454952-m03"...
	I0404 21:52:20.735308   26830 main.go:141] libmachine: (ha-454952-m03) Calling .GetState
	I0404 21:52:20.736967   26830 main.go:141] libmachine: (ha-454952-m03) Calling .Stop
	I0404 21:52:20.740968   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 0/120
	I0404 21:52:21.743003   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 1/120
	I0404 21:52:22.744653   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 2/120
	I0404 21:52:23.746497   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 3/120
	I0404 21:52:24.748166   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 4/120
	I0404 21:52:25.750022   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 5/120
	I0404 21:52:26.751860   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 6/120
	I0404 21:52:27.753863   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 7/120
	I0404 21:52:28.755670   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 8/120
	I0404 21:52:29.757314   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 9/120
	I0404 21:52:30.759355   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 10/120
	I0404 21:52:31.760968   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 11/120
	I0404 21:52:32.762649   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 12/120
	I0404 21:52:33.764311   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 13/120
	I0404 21:52:34.765634   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 14/120
	I0404 21:52:35.767519   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 15/120
	I0404 21:52:36.769204   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 16/120
	I0404 21:52:37.770650   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 17/120
	I0404 21:52:38.772034   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 18/120
	I0404 21:52:39.773801   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 19/120
	I0404 21:52:40.775284   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 20/120
	I0404 21:52:41.777091   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 21/120
	I0404 21:52:42.778877   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 22/120
	I0404 21:52:43.780568   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 23/120
	I0404 21:52:44.782014   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 24/120
	I0404 21:52:45.784162   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 25/120
	I0404 21:52:46.786019   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 26/120
	I0404 21:52:47.787528   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 27/120
	I0404 21:52:48.788980   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 28/120
	I0404 21:52:49.790503   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 29/120
	I0404 21:52:50.793124   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 30/120
	I0404 21:52:51.794792   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 31/120
	I0404 21:52:52.796780   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 32/120
	I0404 21:52:53.798740   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 33/120
	I0404 21:52:54.800597   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 34/120
	I0404 21:52:55.802377   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 35/120
	I0404 21:52:56.803801   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 36/120
	I0404 21:52:57.805190   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 37/120
	I0404 21:52:58.806523   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 38/120
	I0404 21:52:59.807988   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 39/120
	I0404 21:53:00.810252   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 40/120
	I0404 21:53:01.811614   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 41/120
	I0404 21:53:02.813129   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 42/120
	I0404 21:53:03.814353   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 43/120
	I0404 21:53:04.815868   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 44/120
	I0404 21:53:05.817973   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 45/120
	I0404 21:53:06.819592   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 46/120
	I0404 21:53:07.821080   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 47/120
	I0404 21:53:08.822869   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 48/120
	I0404 21:53:09.824342   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 49/120
	I0404 21:53:10.826481   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 50/120
	I0404 21:53:11.827872   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 51/120
	I0404 21:53:12.829876   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 52/120
	I0404 21:53:13.831565   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 53/120
	I0404 21:53:14.833158   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 54/120
	I0404 21:53:15.835629   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 55/120
	I0404 21:53:16.837177   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 56/120
	I0404 21:53:17.838638   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 57/120
	I0404 21:53:18.840115   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 58/120
	I0404 21:53:19.841660   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 59/120
	I0404 21:53:20.844040   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 60/120
	I0404 21:53:21.845818   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 61/120
	I0404 21:53:22.847245   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 62/120
	I0404 21:53:23.848735   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 63/120
	I0404 21:53:24.850094   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 64/120
	I0404 21:53:25.851739   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 65/120
	I0404 21:53:26.853197   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 66/120
	I0404 21:53:27.854791   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 67/120
	I0404 21:53:28.856621   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 68/120
	I0404 21:53:29.858805   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 69/120
	I0404 21:53:30.860777   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 70/120
	I0404 21:53:31.862509   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 71/120
	I0404 21:53:32.863994   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 72/120
	I0404 21:53:33.865483   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 73/120
	I0404 21:53:34.867113   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 74/120
	I0404 21:53:35.869409   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 75/120
	I0404 21:53:36.870848   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 76/120
	I0404 21:53:37.872374   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 77/120
	I0404 21:53:38.873691   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 78/120
	I0404 21:53:39.875400   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 79/120
	I0404 21:53:40.877482   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 80/120
	I0404 21:53:41.878785   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 81/120
	I0404 21:53:42.880246   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 82/120
	I0404 21:53:43.881397   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 83/120
	I0404 21:53:44.883154   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 84/120
	I0404 21:53:45.884727   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 85/120
	I0404 21:53:46.886117   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 86/120
	I0404 21:53:47.887396   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 87/120
	I0404 21:53:48.888697   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 88/120
	I0404 21:53:49.890165   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 89/120
	I0404 21:53:50.891862   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 90/120
	I0404 21:53:51.893796   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 91/120
	I0404 21:53:52.894984   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 92/120
	I0404 21:53:53.896242   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 93/120
	I0404 21:53:54.897805   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 94/120
	I0404 21:53:55.899478   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 95/120
	I0404 21:53:56.900971   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 96/120
	I0404 21:53:57.902481   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 97/120
	I0404 21:53:58.903860   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 98/120
	I0404 21:53:59.905447   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 99/120
	I0404 21:54:00.907246   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 100/120
	I0404 21:54:01.908675   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 101/120
	I0404 21:54:02.910237   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 102/120
	I0404 21:54:03.911458   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 103/120
	I0404 21:54:04.913010   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 104/120
	I0404 21:54:05.914569   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 105/120
	I0404 21:54:06.916709   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 106/120
	I0404 21:54:07.918487   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 107/120
	I0404 21:54:08.920073   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 108/120
	I0404 21:54:09.921680   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 109/120
	I0404 21:54:10.923881   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 110/120
	I0404 21:54:11.925311   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 111/120
	I0404 21:54:12.926683   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 112/120
	I0404 21:54:13.928194   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 113/120
	I0404 21:54:14.929551   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 114/120
	I0404 21:54:15.931232   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 115/120
	I0404 21:54:16.932635   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 116/120
	I0404 21:54:17.934269   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 117/120
	I0404 21:54:18.935844   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 118/120
	I0404 21:54:19.937155   26830 main.go:141] libmachine: (ha-454952-m03) Waiting for machine to stop 119/120
	I0404 21:54:20.937800   26830 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0404 21:54:20.937859   26830 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0404 21:54:20.940083   26830 out.go:177] 
	W0404 21:54:20.942125   26830 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0404 21:54:20.942150   26830 out.go:239] * 
	* 
	W0404 21:54:20.944272   26830 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 21:54:20.945945   26830 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-454952 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-454952 --wait=true -v=7 --alsologtostderr
E0404 21:58:09.142962   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-454952 --wait=true -v=7 --alsologtostderr: (3m59.404287589s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-454952
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-454952 -n ha-454952
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-454952 logs -n 25: (1.953741289s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:48 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m02:/home/docker/cp-test_ha-454952-m03_ha-454952-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m02 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m03_ha-454952-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04:/home/docker/cp-test_ha-454952-m03_ha-454952-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m04 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m03_ha-454952-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp testdata/cp-test.txt                                                | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3335929805/001/cp-test_ha-454952-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952:/home/docker/cp-test_ha-454952-m04_ha-454952.txt                       |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952 sudo cat                                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952.txt                                 |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m02:/home/docker/cp-test_ha-454952-m04_ha-454952-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m02 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m03:/home/docker/cp-test_ha-454952-m04_ha-454952-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m03 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-454952 node stop m02 -v=7                                                     | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-454952 node start m02 -v=7                                                    | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-454952 -v=7                                                           | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-454952 -v=7                                                                | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-454952 --wait=true -v=7                                                    | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:54 UTC | 04 Apr 24 21:58 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-454952                                                                | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:58 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 21:54:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 21:54:21.004101   27181 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:54:21.004246   27181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:54:21.004256   27181 out.go:304] Setting ErrFile to fd 2...
	I0404 21:54:21.004260   27181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:54:21.004458   27181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:54:21.004999   27181 out.go:298] Setting JSON to false
	I0404 21:54:21.005914   27181 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2206,"bootTime":1712265455,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 21:54:21.005979   27181 start.go:139] virtualization: kvm guest
	I0404 21:54:21.008504   27181 out.go:177] * [ha-454952] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 21:54:21.010754   27181 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 21:54:21.010783   27181 notify.go:220] Checking for updates...
	I0404 21:54:21.013654   27181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 21:54:21.015080   27181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:54:21.016295   27181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:54:21.017881   27181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 21:54:21.019248   27181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 21:54:21.021201   27181 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:54:21.021286   27181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 21:54:21.021684   27181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:54:21.021739   27181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:54:21.038121   27181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44891
	I0404 21:54:21.038499   27181 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:54:21.039017   27181 main.go:141] libmachine: Using API Version  1
	I0404 21:54:21.039041   27181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:54:21.039372   27181 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:54:21.039558   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:54:21.079921   27181 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 21:54:21.081214   27181 start.go:297] selected driver: kvm2
	I0404 21:54:21.081227   27181 start.go:901] validating driver "kvm2" against &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.251 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:54:21.081365   27181 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 21:54:21.081660   27181 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:54:21.081722   27181 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 21:54:21.097862   27181 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 21:54:21.098524   27181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 21:54:21.098590   27181 cni.go:84] Creating CNI manager for ""
	I0404 21:54:21.098598   27181 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0404 21:54:21.098655   27181 start.go:340] cluster config:
	{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.251 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:54:21.098779   27181 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:54:21.100749   27181 out.go:177] * Starting "ha-454952" primary control-plane node in "ha-454952" cluster
	I0404 21:54:21.102207   27181 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:54:21.102239   27181 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0404 21:54:21.102246   27181 cache.go:56] Caching tarball of preloaded images
	I0404 21:54:21.102311   27181 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 21:54:21.102322   27181 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0404 21:54:21.102463   27181 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:54:21.102646   27181 start.go:360] acquireMachinesLock for ha-454952: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 21:54:21.102692   27181 start.go:364] duration metric: took 28.551µs to acquireMachinesLock for "ha-454952"
	I0404 21:54:21.102703   27181 start.go:96] Skipping create...Using existing machine configuration
	I0404 21:54:21.102708   27181 fix.go:54] fixHost starting: 
	I0404 21:54:21.102948   27181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:54:21.102977   27181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:54:21.117131   27181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0404 21:54:21.117575   27181 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:54:21.118111   27181 main.go:141] libmachine: Using API Version  1
	I0404 21:54:21.118136   27181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:54:21.118466   27181 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:54:21.118632   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:54:21.118792   27181 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:54:21.120488   27181 fix.go:112] recreateIfNeeded on ha-454952: state=Running err=<nil>
	W0404 21:54:21.120522   27181 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 21:54:21.123711   27181 out.go:177] * Updating the running kvm2 "ha-454952" VM ...
	I0404 21:54:21.125202   27181 machine.go:94] provisionDockerMachine start ...
	I0404 21:54:21.125220   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:54:21.125414   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.127665   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.128055   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.128078   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.128184   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:54:21.128403   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.128571   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.128734   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:54:21.128883   27181 main.go:141] libmachine: Using SSH client type: native
	I0404 21:54:21.129069   27181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:54:21.129087   27181 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 21:54:21.250515   27181 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-454952
	
	I0404 21:54:21.250558   27181 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:54:21.250814   27181 buildroot.go:166] provisioning hostname "ha-454952"
	I0404 21:54:21.250844   27181 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:54:21.251053   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.254004   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.254445   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.254469   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.254767   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:54:21.254957   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.255154   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.255307   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:54:21.255474   27181 main.go:141] libmachine: Using SSH client type: native
	I0404 21:54:21.255674   27181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:54:21.255692   27181 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-454952 && echo "ha-454952" | sudo tee /etc/hostname
	I0404 21:54:21.395447   27181 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-454952
	
	I0404 21:54:21.395485   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.398337   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.398774   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.398808   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.399085   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:54:21.399296   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.399484   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.399610   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:54:21.399813   27181 main.go:141] libmachine: Using SSH client type: native
	I0404 21:54:21.399965   27181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:54:21.399980   27181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-454952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-454952/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-454952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 21:54:21.517652   27181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:54:21.517688   27181 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 21:54:21.517732   27181 buildroot.go:174] setting up certificates
	I0404 21:54:21.517743   27181 provision.go:84] configureAuth start
	I0404 21:54:21.517756   27181 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:54:21.517993   27181 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:54:21.520970   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.521364   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.521381   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.521526   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.523625   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.523946   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.523976   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.524163   27181 provision.go:143] copyHostCerts
	I0404 21:54:21.524191   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:54:21.524226   27181 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 21:54:21.524234   27181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:54:21.524303   27181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 21:54:21.524386   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:54:21.524402   27181 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 21:54:21.524409   27181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:54:21.524432   27181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 21:54:21.524486   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:54:21.524501   27181 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 21:54:21.524507   27181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:54:21.524528   27181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 21:54:21.524622   27181 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.ha-454952 san=[127.0.0.1 192.168.39.13 ha-454952 localhost minikube]
	I0404 21:54:21.777637   27181 provision.go:177] copyRemoteCerts
	I0404 21:54:21.777690   27181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 21:54:21.777712   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.780792   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.781185   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.781215   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.781406   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:54:21.781739   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.781960   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:54:21.782104   27181 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:54:21.873308   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0404 21:54:21.873406   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 21:54:21.903055   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0404 21:54:21.903139   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 21:54:21.937605   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0404 21:54:21.937683   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0404 21:54:21.975083   27181 provision.go:87] duration metric: took 457.327896ms to configureAuth
	I0404 21:54:21.975116   27181 buildroot.go:189] setting minikube options for container-runtime
	I0404 21:54:21.975349   27181 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:54:21.975454   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.978275   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.978653   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.978675   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.978840   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:54:21.979028   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.979150   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.979278   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:54:21.979413   27181 main.go:141] libmachine: Using SSH client type: native
	I0404 21:54:21.979577   27181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:54:21.979592   27181 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 21:55:52.867867   27181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 21:55:52.867891   27181 machine.go:97] duration metric: took 1m31.742674738s to provisionDockerMachine
	I0404 21:55:52.867907   27181 start.go:293] postStartSetup for "ha-454952" (driver="kvm2")
	I0404 21:55:52.867918   27181 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 21:55:52.867931   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:52.868353   27181 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 21:55:52.868393   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:55:52.871209   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:52.871649   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:52.871694   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:52.871798   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:55:52.871977   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:52.872152   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:55:52.872304   27181 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:55:52.964894   27181 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 21:55:52.969624   27181 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 21:55:52.969648   27181 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 21:55:52.969709   27181 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 21:55:52.969772   27181 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 21:55:52.969782   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /etc/ssl/certs/125542.pem
	I0404 21:55:52.969855   27181 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 21:55:52.980328   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:55:53.007870   27181 start.go:296] duration metric: took 139.9513ms for postStartSetup
	I0404 21:55:53.007914   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:53.008228   27181 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0404 21:55:53.008255   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:55:53.011073   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.011508   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:53.011538   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.011693   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:55:53.011895   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:53.012063   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:55:53.012224   27181 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	W0404 21:55:53.099730   27181 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0404 21:55:53.099758   27181 fix.go:56] duration metric: took 1m31.997048796s for fixHost
	I0404 21:55:53.099781   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:55:53.102642   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.103142   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:53.103173   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.103346   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:55:53.103541   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:53.103734   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:53.103904   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:55:53.104059   27181 main.go:141] libmachine: Using SSH client type: native
	I0404 21:55:53.104255   27181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:55:53.104267   27181 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 21:55:53.221235   27181 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712267753.187426526
	
	I0404 21:55:53.221256   27181 fix.go:216] guest clock: 1712267753.187426526
	I0404 21:55:53.221263   27181 fix.go:229] Guest: 2024-04-04 21:55:53.187426526 +0000 UTC Remote: 2024-04-04 21:55:53.099766002 +0000 UTC m=+92.143139349 (delta=87.660524ms)
	I0404 21:55:53.221292   27181 fix.go:200] guest clock delta is within tolerance: 87.660524ms
	I0404 21:55:53.221297   27181 start.go:83] releasing machines lock for "ha-454952", held for 1m32.118598573s
	I0404 21:55:53.221320   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:53.221585   27181 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:55:53.224261   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.224650   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:53.224681   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.224853   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:53.225389   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:53.225572   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:53.225663   27181 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 21:55:53.225711   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:55:53.225883   27181 ssh_runner.go:195] Run: cat /version.json
	I0404 21:55:53.225907   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:55:53.228601   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.228968   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.229015   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:53.229037   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.229129   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:55:53.229305   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:53.229444   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:55:53.229490   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:53.229514   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.229585   27181 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:55:53.229618   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:55:53.229770   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:53.229921   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:55:53.230052   27181 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:55:53.345414   27181 ssh_runner.go:195] Run: systemctl --version
	I0404 21:55:53.352521   27181 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 21:55:53.524115   27181 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 21:55:53.531412   27181 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 21:55:53.531472   27181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 21:55:53.543074   27181 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0404 21:55:53.543098   27181 start.go:494] detecting cgroup driver to use...
	I0404 21:55:53.543155   27181 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 21:55:53.567023   27181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 21:55:53.583777   27181 docker.go:217] disabling cri-docker service (if available) ...
	I0404 21:55:53.583837   27181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 21:55:53.600423   27181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 21:55:53.616063   27181 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 21:55:53.797935   27181 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 21:55:53.954446   27181 docker.go:233] disabling docker service ...
	I0404 21:55:53.954512   27181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 21:55:53.973478   27181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 21:55:53.988665   27181 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 21:55:54.142670   27181 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 21:55:54.293766   27181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 21:55:54.310919   27181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 21:55:54.332437   27181 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 21:55:54.332508   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.344595   27181 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 21:55:54.344660   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.355996   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.367319   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.378633   27181 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 21:55:54.390388   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.402769   27181 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.415193   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.426296   27181 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 21:55:54.436174   27181 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 21:55:54.446519   27181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:55:54.598483   27181 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 21:55:58.300965   27181 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.702434241s)
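(Editorial aside: the ssh_runner lines above rewrite /etc/crio/crio.conf.d/02-crio.conf in place — pause image, cgroup_manager, default_sysctls — and then restart CRI-O. Below is a minimal, hypothetical sketch of issuing that kind of remote edit with golang.org/x/crypto/ssh; the host, user and key path are placeholders taken from this run, not minikube's actual ssh_runner code.)

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one shell command over an established SSH connection and
// returns its combined stdout/stderr, similar in spirit to the ssh_runner calls above.
func runRemote(client *ssh.Client, cmd string) (string, error) {
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Placeholder key path; this run connects as user "docker" to 192.168.39.13:22.
	key, err := os.ReadFile("/path/to/machine/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.39.13:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One of the edits the log records: point CRI-O at the pause image, then restart it.
	out, err := runRemote(client, `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf && sudo systemctl restart crio`)
	fmt.Println(out, err)
}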
	I0404 21:55:58.301015   27181 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 21:55:58.301061   27181 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 21:55:58.306850   27181 start.go:562] Will wait 60s for crictl version
	I0404 21:55:58.306898   27181 ssh_runner.go:195] Run: which crictl
	I0404 21:55:58.311205   27181 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 21:55:58.353542   27181 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 21:55:58.353625   27181 ssh_runner.go:195] Run: crio --version
	I0404 21:55:58.386691   27181 ssh_runner.go:195] Run: crio --version
	I0404 21:55:58.419949   27181 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 21:55:58.421293   27181 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:55:58.424066   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:58.424556   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:58.424584   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:58.424791   27181 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 21:55:58.429819   27181 kubeadm.go:877] updating cluster {Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.251 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 21:55:58.429968   27181 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:55:58.430009   27181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 21:55:58.476404   27181 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 21:55:58.476425   27181 crio.go:433] Images already preloaded, skipping extraction
	I0404 21:55:58.476473   27181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 21:55:58.511567   27181 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 21:55:58.511594   27181 cache_images.go:84] Images are preloaded, skipping loading
	I0404 21:55:58.511605   27181 kubeadm.go:928] updating node { 192.168.39.13 8443 v1.29.3 crio true true} ...
	I0404 21:55:58.511729   27181 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-454952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 21:55:58.511805   27181 ssh_runner.go:195] Run: crio config
	I0404 21:55:58.564003   27181 cni.go:84] Creating CNI manager for ""
	I0404 21:55:58.564025   27181 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0404 21:55:58.564035   27181 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 21:55:58.564064   27181 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.13 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-454952 NodeName:ha-454952 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 21:55:58.564230   27181 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-454952"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
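(Editorial aside: the kubeadm config rendered above is a multi-document YAML stream — InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by `---`. A minimal sketch, assuming the rendered text is saved locally as kubeadm.yaml (hypothetical path), showing how to walk the documents with gopkg.in/yaml.v3 and sanity-check that the kubelet cgroupDriver matches the cgroup_manager written into 02-crio.conf earlier; this is not minikube code.)

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break // all documents read
			}
			log.Fatal(err)
		}
		// Kinds in this stream: InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration.
		fmt.Println("kind:", doc["kind"])
		if doc["kind"] == "KubeletConfiguration" {
			// Should agree with cgroup_manager = "cgroupfs" configured for CRI-O above.
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
		}
	}
}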
	
	I0404 21:55:58.564264   27181 kube-vip.go:111] generating kube-vip config ...
	I0404 21:55:58.564315   27181 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0404 21:55:58.576269   27181 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0404 21:55:58.576428   27181 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
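(Editorial aside: the kube-vip static pod above is configured entirely through environment variables — the HA VIP 192.168.39.254 and the leader-election timings (5s lease, 3s renew deadline, 1s retry) all live in spec.containers[0].env. A minimal sketch, assuming the manifest is saved as kube-vip.yaml, that reads those values back with gopkg.in/yaml.v3.)

package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// staticPod is just enough of the Pod schema to reach the kube-vip env list.
type staticPod struct {
	Spec struct {
		Containers []struct {
			Name string `yaml:"name"`
			Env  []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("kube-vip.yaml") // hypothetical local copy of the manifest above
	if err != nil {
		log.Fatal(err)
	}
	var p staticPod
	if err := yaml.Unmarshal(data, &p); err != nil {
		log.Fatal(err)
	}
	for _, c := range p.Spec.Containers {
		for _, e := range c.Env {
			switch e.Name {
			case "address", "vip_leaseduration", "vip_renewdeadline", "vip_retryperiod":
				// "address" should print the HA VIP, 192.168.39.254 in this run.
				fmt.Printf("%s=%s\n", e.Name, e.Value)
			}
		}
	}
}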
	I0404 21:55:58.576487   27181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 21:55:58.587007   27181 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 21:55:58.587076   27181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0404 21:55:58.597399   27181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0404 21:55:58.614841   27181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 21:55:58.632756   27181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0404 21:55:58.652474   27181 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0404 21:55:58.673658   27181 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0404 21:55:58.678100   27181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:55:58.836919   27181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:55:58.854024   27181 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952 for IP: 192.168.39.13
	I0404 21:55:58.854048   27181 certs.go:194] generating shared ca certs ...
	I0404 21:55:58.854064   27181 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:55:58.854276   27181 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 21:55:58.854343   27181 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 21:55:58.854357   27181 certs.go:256] generating profile certs ...
	I0404 21:55:58.854423   27181 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key
	I0404 21:55:58.854449   27181 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.b258ffc0
	I0404 21:55:58.854463   27181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.b258ffc0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.13 192.168.39.60 192.168.39.217 192.168.39.254]
	I0404 21:55:59.063351   27181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.b258ffc0 ...
	I0404 21:55:59.063382   27181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.b258ffc0: {Name:mk5433d65ebbc99dc168542c7e560d66181820c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:55:59.063542   27181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.b258ffc0 ...
	I0404 21:55:59.063554   27181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.b258ffc0: {Name:mk01ba783c7e8d0935e0f7a584b7b8848c4c01dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:55:59.063624   27181 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.b258ffc0 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt
	I0404 21:55:59.063769   27181 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.b258ffc0 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key
	I0404 21:55:59.063896   27181 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key
	I0404 21:55:59.063911   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0404 21:55:59.063927   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0404 21:55:59.063941   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0404 21:55:59.063951   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0404 21:55:59.063969   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0404 21:55:59.063981   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0404 21:55:59.063992   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0404 21:55:59.064003   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0404 21:55:59.064041   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 21:55:59.064066   27181 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 21:55:59.064075   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 21:55:59.064099   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 21:55:59.064137   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 21:55:59.064157   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 21:55:59.064194   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:55:59.064217   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /usr/share/ca-certificates/125542.pem
	I0404 21:55:59.064230   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:55:59.064242   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem -> /usr/share/ca-certificates/12554.pem
	I0404 21:55:59.064818   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 21:55:59.093429   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 21:55:59.119285   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 21:55:59.146917   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 21:55:59.173303   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0404 21:55:59.200852   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 21:55:59.227698   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 21:55:59.254649   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 21:55:59.280699   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 21:55:59.307064   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 21:55:59.333591   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 21:55:59.359579   27181 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
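(Editorial aside: the apiserver serving certificate generated above (crypto.go:68) carries IP SANs for the service IP, loopback, every control-plane node and the HA VIP: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.13/.60/.217/.254. A minimal sketch of issuing a certificate with an equivalent IP-SAN list via crypto/x509; it self-signs for brevity, whereas minikube signs against its minikubeCA key.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs recorded in the log, including the HA VIP 192.168.39.254.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.13"), net.ParseIP("192.168.39.60"),
			net.ParseIP("192.168.39.217"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed here for brevity; the real cert is signed by the minikubeCA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}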
	I0404 21:55:59.406083   27181 ssh_runner.go:195] Run: openssl version
	I0404 21:55:59.421057   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 21:55:59.434123   27181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 21:55:59.439412   27181 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 21:55:59.439469   27181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 21:55:59.446015   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 21:55:59.456399   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 21:55:59.469371   27181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:55:59.474477   27181 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:55:59.474541   27181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:55:59.480714   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 21:55:59.490801   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 21:55:59.502144   27181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 21:55:59.507930   27181 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 21:55:59.507995   27181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 21:55:59.514189   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 21:55:59.524145   27181 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 21:55:59.529373   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 21:55:59.535677   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 21:55:59.542836   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 21:55:59.549012   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 21:55:59.555011   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 21:55:59.561006   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
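(Editorial aside: each `openssl x509 -noout -in ... -checkend 86400` call above asks whether the certificate is still valid for at least another 24 hours. The same check expressed in Go with the standard library, as a minimal sketch; the certificate path is a placeholder for any of the files tested above.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path; the log checks the apiserver, etcd and front-proxy client certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}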
	I0404 21:55:59.566941   27181 kubeadm.go:391] StartCluster: {Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.251 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:55:59.567070   27181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 21:55:59.567119   27181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 21:55:59.608244   27181 cri.go:89] found id: "40c7e5205a7f2c7dca97128c4b3db2baa0cb5a2a2d71e6e0041bb8dcc5e62085"
	I0404 21:55:59.608270   27181 cri.go:89] found id: "3f1f1315a8b9166afc7caab3311dbe513f1259bca49d23c86707d7c46cd90718"
	I0404 21:55:59.608276   27181 cri.go:89] found id: "f0e949f71327f502a0430c391da79a4e330beb8aa9171c6f6cc4c6f1627b6008"
	I0404 21:55:59.608281   27181 cri.go:89] found id: "9e1a578fe0ac02dadad448b39bd45569b6296c53aa66eab5d21d43b7572cd092"
	I0404 21:55:59.608285   27181 cri.go:89] found id: "f5da670e72260df047daddb872854e23d680c6e1ba40671362700eb4dcc9b43e"
	I0404 21:55:59.608293   27181 cri.go:89] found id: "a65041cfea93095250c3fdc69a6eb0688089d798ad9c23a920244aea4d408dbf"
	I0404 21:55:59.608297   27181 cri.go:89] found id: "2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f"
	I0404 21:55:59.608301   27181 cri.go:89] found id: "b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c"
	I0404 21:55:59.608304   27181 cri.go:89] found id: "90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05"
	I0404 21:55:59.608311   27181 cri.go:89] found id: "a0c8fa7da2804867788af45dab4db03cd14b20ee017b4626f4e256792a8e568d"
	I0404 21:55:59.608315   27181 cri.go:89] found id: "c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65"
	I0404 21:55:59.608319   27181 cri.go:89] found id: "e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048"
	I0404 21:55:59.608333   27181 cri.go:89] found id: "a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1"
	I0404 21:55:59.608341   27181 cri.go:89] found id: "72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3"
	I0404 21:55:59.608355   27181 cri.go:89] found id: ""
	I0404 21:55:59.608415   27181 ssh_runner.go:195] Run: sudo runc list -f json
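(Editorial aside: the container IDs listed above come from a label-filtered `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, run through `sudo -s eval`. A minimal sketch that shells out to crictl the same way and collects the IDs; it assumes crictl is on PATH and passwordless sudo, and is not minikube's own implementation.)

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation recorded in the log: all containers (any state), IDs only,
	// restricted to pods in the kube-system namespace.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}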
	
	
	==> CRI-O <==
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.122068842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267901122042719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f74b5e3-0ee8-4ff5-b583-f163d214f6fb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.123003600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cae5f4d4-71b6-4356-9bc3-5b9dc57d3d36 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.123240681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cae5f4d4-71b6-4356-9bc3-5b9dc57d3d36 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.123887811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6cc4d61d02bd1b7fb2857ec3294e5d9a2bdb3ee135480853f3cc50e864ba0e,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267835730605441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6afa481405f5045f3db6cf7ae9618dad4ebdc6752c9305a9d7e373f249d727,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267817726721259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b188c8442602f8efd7eb8e9047476741c8c5c865ede423661d758bb83b6b05c,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267808726201589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e953bf3c89fc497ba14cbf93c24765653b3b1037dd866ee0b68061b98fb218fe,PodSandboxId:01a1479528f6de61e52ce7f71e64ada2eca7c4d104340ce49ce430f677aba4d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267798366173313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e409496e2bfa6083bb1dd353fce8e7415c5543ae14bb9bccdf87e80d8ddec7,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267796105878325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a66b8b586a4fd72dbaee27006e88e5d9c3313bb6bd16f4c4106d12d9283940,PodSandboxId:ac16fed91142a3e463e8cedb0a1a1c7176c3085e18aa6f7c04aeaa29151181ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267780537407952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9be7382aec2c5718d2c3ee2100157c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:a3a568a83338d7ba5e0ad5f31df5071f6aa7c2b4eb6f48a92c98d29ac8bd266d,PodSandboxId:3bafbcd423401c441a4d165a06c80cbc3da18ddd50a1f9c2b73e6bc688e44b20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267765689534579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:bc8218a5029f13c2ed80f0fe1a747232912e939468d12ff6da8d1f9801f45f46,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712267765537650432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee2070
be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712267764855980645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9141f11f6629018a0b27dac80b27b6813a81e074
b69b7db8c3a549a51a5209,PodSandboxId:1ec2a54c59b2006f4b1df533f96f23078a144f176c5bc6defcee5c3573aa70c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764965596764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a678a5fd4129c9d5ed065e6d6cf82de15766bc944d4ba612411d966c79dea733,PodSandboxId:787cebef1eeefad70cbda0986a9cc3ea7642d152e9bd8ba2e4ad63e43caab690,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764849872069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b43e4ef90fb190c80b32153e55f8d5e4d07a57a5a7f58e8ae3270c59a5b7a4,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712267764733460583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee36899c21ef1e958722bdf5f5d9a1353202a8067eb41c81e4cdd8fe7c8129b,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712267764672483098,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb3e4500f1988eec953c2f90b45fb2e4ce58d37fcb5a8d83b033102bfb7557a,PodSandboxId:7c0af8cf9edf4ba036b98efbfc79d4e1ca2707da7799dffbb99455fdd6921113,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267764565452653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2
961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbea1fa16613e48ca7af74cdbecc207ba6536f8a1410b2c3af9fee06da42d20,PodSandboxId:11ae4dcd2ed2a2582d43ba63f187be7aece2582a9c20654daf0b6dad6bd646c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267764503471258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kube
rnetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712267279351397601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kuber
netes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101615496775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101585362292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712267099360246586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712267079628250088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1712267079596518755,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cae5f4d4-71b6-4356-9bc3-5b9dc57d3d36 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.170228778Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4d2e1e4-f015-4253-b52d-b836d3f3a51c name=/runtime.v1.RuntimeService/Version
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.170353806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4d2e1e4-f015-4253-b52d-b836d3f3a51c name=/runtime.v1.RuntimeService/Version
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.171818859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5c30fe9-2f99-4b12-944f-2ce8b4601ff3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.172494704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267901172469450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5c30fe9-2f99-4b12-944f-2ce8b4601ff3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.173057263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=184ead32-d2f5-4548-abbd-02ff8c2103ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.173142349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=184ead32-d2f5-4548-abbd-02ff8c2103ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.173663206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6cc4d61d02bd1b7fb2857ec3294e5d9a2bdb3ee135480853f3cc50e864ba0e,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267835730605441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6afa481405f5045f3db6cf7ae9618dad4ebdc6752c9305a9d7e373f249d727,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267817726721259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b188c8442602f8efd7eb8e9047476741c8c5c865ede423661d758bb83b6b05c,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267808726201589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e953bf3c89fc497ba14cbf93c24765653b3b1037dd866ee0b68061b98fb218fe,PodSandboxId:01a1479528f6de61e52ce7f71e64ada2eca7c4d104340ce49ce430f677aba4d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267798366173313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e409496e2bfa6083bb1dd353fce8e7415c5543ae14bb9bccdf87e80d8ddec7,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267796105878325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a66b8b586a4fd72dbaee27006e88e5d9c3313bb6bd16f4c4106d12d9283940,PodSandboxId:ac16fed91142a3e463e8cedb0a1a1c7176c3085e18aa6f7c04aeaa29151181ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267780537407952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9be7382aec2c5718d2c3ee2100157c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:a3a568a83338d7ba5e0ad5f31df5071f6aa7c2b4eb6f48a92c98d29ac8bd266d,PodSandboxId:3bafbcd423401c441a4d165a06c80cbc3da18ddd50a1f9c2b73e6bc688e44b20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267765689534579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:bc8218a5029f13c2ed80f0fe1a747232912e939468d12ff6da8d1f9801f45f46,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712267765537650432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee2070
be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712267764855980645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9141f11f6629018a0b27dac80b27b6813a81e074
b69b7db8c3a549a51a5209,PodSandboxId:1ec2a54c59b2006f4b1df533f96f23078a144f176c5bc6defcee5c3573aa70c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764965596764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a678a5fd4129c9d5ed065e6d6cf82de15766bc944d4ba612411d966c79dea733,PodSandboxId:787cebef1eeefad70cbda0986a9cc3ea7642d152e9bd8ba2e4ad63e43caab690,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764849872069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b43e4ef90fb190c80b32153e55f8d5e4d07a57a5a7f58e8ae3270c59a5b7a4,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712267764733460583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee36899c21ef1e958722bdf5f5d9a1353202a8067eb41c81e4cdd8fe7c8129b,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712267764672483098,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb3e4500f1988eec953c2f90b45fb2e4ce58d37fcb5a8d83b033102bfb7557a,PodSandboxId:7c0af8cf9edf4ba036b98efbfc79d4e1ca2707da7799dffbb99455fdd6921113,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267764565452653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2
961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbea1fa16613e48ca7af74cdbecc207ba6536f8a1410b2c3af9fee06da42d20,PodSandboxId:11ae4dcd2ed2a2582d43ba63f187be7aece2582a9c20654daf0b6dad6bd646c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267764503471258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kube
rnetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712267279351397601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kuber
netes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101615496775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101585362292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712267099360246586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712267079628250088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1712267079596518755,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=184ead32-d2f5-4548-abbd-02ff8c2103ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.226519803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e2b81be-9e85-4587-a4ef-8a68e74f085f name=/runtime.v1.RuntimeService/Version
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.226616574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e2b81be-9e85-4587-a4ef-8a68e74f085f name=/runtime.v1.RuntimeService/Version
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.234463291Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8789b3b6-5d67-4232-9bc2-73532f5d1468 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.235244274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267901235207902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8789b3b6-5d67-4232-9bc2-73532f5d1468 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.235940763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6670af9-57be-474e-b7c9-7a9a3c6e65de name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.236002279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6670af9-57be-474e-b7c9-7a9a3c6e65de name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.236385007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6cc4d61d02bd1b7fb2857ec3294e5d9a2bdb3ee135480853f3cc50e864ba0e,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267835730605441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6afa481405f5045f3db6cf7ae9618dad4ebdc6752c9305a9d7e373f249d727,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267817726721259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b188c8442602f8efd7eb8e9047476741c8c5c865ede423661d758bb83b6b05c,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267808726201589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e953bf3c89fc497ba14cbf93c24765653b3b1037dd866ee0b68061b98fb218fe,PodSandboxId:01a1479528f6de61e52ce7f71e64ada2eca7c4d104340ce49ce430f677aba4d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267798366173313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e409496e2bfa6083bb1dd353fce8e7415c5543ae14bb9bccdf87e80d8ddec7,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267796105878325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a66b8b586a4fd72dbaee27006e88e5d9c3313bb6bd16f4c4106d12d9283940,PodSandboxId:ac16fed91142a3e463e8cedb0a1a1c7176c3085e18aa6f7c04aeaa29151181ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267780537407952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9be7382aec2c5718d2c3ee2100157c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:a3a568a83338d7ba5e0ad5f31df5071f6aa7c2b4eb6f48a92c98d29ac8bd266d,PodSandboxId:3bafbcd423401c441a4d165a06c80cbc3da18ddd50a1f9c2b73e6bc688e44b20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267765689534579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:bc8218a5029f13c2ed80f0fe1a747232912e939468d12ff6da8d1f9801f45f46,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712267765537650432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee2070
be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712267764855980645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9141f11f6629018a0b27dac80b27b6813a81e074
b69b7db8c3a549a51a5209,PodSandboxId:1ec2a54c59b2006f4b1df533f96f23078a144f176c5bc6defcee5c3573aa70c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764965596764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a678a5fd4129c9d5ed065e6d6cf82de15766bc944d4ba612411d966c79dea733,PodSandboxId:787cebef1eeefad70cbda0986a9cc3ea7642d152e9bd8ba2e4ad63e43caab690,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764849872069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b43e4ef90fb190c80b32153e55f8d5e4d07a57a5a7f58e8ae3270c59a5b7a4,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712267764733460583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee36899c21ef1e958722bdf5f5d9a1353202a8067eb41c81e4cdd8fe7c8129b,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712267764672483098,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb3e4500f1988eec953c2f90b45fb2e4ce58d37fcb5a8d83b033102bfb7557a,PodSandboxId:7c0af8cf9edf4ba036b98efbfc79d4e1ca2707da7799dffbb99455fdd6921113,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267764565452653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2
961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbea1fa16613e48ca7af74cdbecc207ba6536f8a1410b2c3af9fee06da42d20,PodSandboxId:11ae4dcd2ed2a2582d43ba63f187be7aece2582a9c20654daf0b6dad6bd646c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267764503471258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kube
rnetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712267279351397601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kuber
netes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101615496775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101585362292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712267099360246586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712267079628250088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1712267079596518755,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6670af9-57be-474e-b7c9-7a9a3c6e65de name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.282045323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff56f52d-5515-4d88-a7b1-95299cda202e name=/runtime.v1.RuntimeService/Version
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.282146184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff56f52d-5515-4d88-a7b1-95299cda202e name=/runtime.v1.RuntimeService/Version
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.283374431Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6cfea65-aae8-4c7c-a8e6-6bb93122b08f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.283913981Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712267901283887626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6cfea65-aae8-4c7c-a8e6-6bb93122b08f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.284668098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08426680-1e2d-478f-830f-bca5110d58ca name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.284823085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08426680-1e2d-478f-830f-bca5110d58ca name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 21:58:21 ha-454952 crio[3915]: time="2024-04-04 21:58:21.285263712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6cc4d61d02bd1b7fb2857ec3294e5d9a2bdb3ee135480853f3cc50e864ba0e,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267835730605441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6afa481405f5045f3db6cf7ae9618dad4ebdc6752c9305a9d7e373f249d727,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267817726721259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b188c8442602f8efd7eb8e9047476741c8c5c865ede423661d758bb83b6b05c,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267808726201589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e953bf3c89fc497ba14cbf93c24765653b3b1037dd866ee0b68061b98fb218fe,PodSandboxId:01a1479528f6de61e52ce7f71e64ada2eca7c4d104340ce49ce430f677aba4d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267798366173313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e409496e2bfa6083bb1dd353fce8e7415c5543ae14bb9bccdf87e80d8ddec7,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267796105878325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a66b8b586a4fd72dbaee27006e88e5d9c3313bb6bd16f4c4106d12d9283940,PodSandboxId:ac16fed91142a3e463e8cedb0a1a1c7176c3085e18aa6f7c04aeaa29151181ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267780537407952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9be7382aec2c5718d2c3ee2100157c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:a3a568a83338d7ba5e0ad5f31df5071f6aa7c2b4eb6f48a92c98d29ac8bd266d,PodSandboxId:3bafbcd423401c441a4d165a06c80cbc3da18ddd50a1f9c2b73e6bc688e44b20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267765689534579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:bc8218a5029f13c2ed80f0fe1a747232912e939468d12ff6da8d1f9801f45f46,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712267765537650432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee2070
be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712267764855980645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9141f11f6629018a0b27dac80b27b6813a81e074
b69b7db8c3a549a51a5209,PodSandboxId:1ec2a54c59b2006f4b1df533f96f23078a144f176c5bc6defcee5c3573aa70c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764965596764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a678a5fd4129c9d5ed065e6d6cf82de15766bc944d4ba612411d966c79dea733,PodSandboxId:787cebef1eeefad70cbda0986a9cc3ea7642d152e9bd8ba2e4ad63e43caab690,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764849872069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b43e4ef90fb190c80b32153e55f8d5e4d07a57a5a7f58e8ae3270c59a5b7a4,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712267764733460583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee36899c21ef1e958722bdf5f5d9a1353202a8067eb41c81e4cdd8fe7c8129b,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712267764672483098,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb3e4500f1988eec953c2f90b45fb2e4ce58d37fcb5a8d83b033102bfb7557a,PodSandboxId:7c0af8cf9edf4ba036b98efbfc79d4e1ca2707da7799dffbb99455fdd6921113,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267764565452653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2
961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbea1fa16613e48ca7af74cdbecc207ba6536f8a1410b2c3af9fee06da42d20,PodSandboxId:11ae4dcd2ed2a2582d43ba63f187be7aece2582a9c20654daf0b6dad6bd646c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267764503471258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kube
rnetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712267279351397601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kuber
netes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101615496775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101585362292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712267099360246586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712267079628250088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1712267079596518755,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08426680-1e2d-478f-830f-bca5110d58ca name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4a6cc4d61d02b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               4                   af7f752f0dead       kindnet-v8wv6
	5c6afa481405f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   be9d1c66a7fb2       storage-provisioner
	9b188c8442602       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   2                   3c2cbdd1490fd       kube-controller-manager-ha-454952
	e953bf3c89fc4       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   01a1479528f6d       busybox-7fdf7869d9-q56fw
	b0e409496e2bf       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            3                   8608a38b43967       kube-apiserver-ha-454952
	71a66b8b586a4       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   ac16fed91142a       kube-vip-ha-454952
	a3a568a83338d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      2 minutes ago        Running             kube-proxy                1                   3bafbcd423401       kube-proxy-gjvm9
	bc8218a5029f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   be9d1c66a7fb2       storage-provisioner
	2a9141f11f662       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   1ec2a54c59b20       coredns-76f75df574-9qsz7
	eee2070be0d0e       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               3                   af7f752f0dead       kindnet-v8wv6
	a678a5fd4129c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   787cebef1eeef       coredns-76f75df574-hsdfw
	b9b43e4ef90fb       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      2 minutes ago        Exited              kube-controller-manager   1                   3c2cbdd1490fd       kube-controller-manager-ha-454952
	9ee36899c21ef       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      2 minutes ago        Exited              kube-apiserver            2                   8608a38b43967       kube-apiserver-ha-454952
	aeb3e4500f198       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      2 minutes ago        Running             kube-scheduler            1                   7c0af8cf9edf4       kube-scheduler-ha-454952
	ebbea1fa16613       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   11ae4dcd2ed2a       etcd-ha-454952
	85478f2f51e89       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   2c8e166c4509c       busybox-7fdf7869d9-q56fw
	2f6afcac0a6b1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   b1934889b30c3       coredns-76f75df574-9qsz7
	b3fc8d8ef023d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   0b786dbf91033       coredns-76f75df574-hsdfw
	90c39a2687464       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      13 minutes ago       Exited              kube-proxy                0                   2748de75b7d2d       kube-proxy-gjvm9
	e9faec0816d4c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      13 minutes ago       Exited              kube-scheduler            0                   9f1d5c3d0af96       kube-scheduler-ha-454952
	72549bccc4ca2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   92d02e4d213b3       etcd-ha-454952
	
	
	==> coredns [2a9141f11f6629018a0b27dac80b27b6813a81e074b69b7db8c3a549a51a5209] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52852->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[236522688]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Apr-2024 21:56:16.680) (total time: 10162ms):
	Trace[236522688]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52852->10.96.0.1:443: read: connection reset by peer 10162ms (21:56:26.843)
	Trace[236522688]: [10.162687737s] [10.162687737s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52852->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56616->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56616->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52848->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1143492343]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Apr-2024 21:56:16.658) (total time: 10286ms):
	Trace[1143492343]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52848->10.96.0.1:443: read: connection reset by peer 10286ms (21:56:26.945)
	Trace[1143492343]: [10.286644886s] [10.286644886s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52848->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f] <==
	[INFO] 10.244.2.2:49348 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146448s
	[INFO] 10.244.2.2:48867 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138618s
	[INFO] 10.244.0.4:54618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070304s
	[INFO] 10.244.1.2:58936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144716s
	[INFO] 10.244.1.2:43170 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002050369s
	[INFO] 10.244.1.2:59811 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149418s
	[INFO] 10.244.1.2:58173 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001389488s
	[INFO] 10.244.1.2:50742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078385s
	[INFO] 10.244.1.2:46973 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077499s
	[INFO] 10.244.2.2:43785 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153069s
	[INFO] 10.244.2.2:37406 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074939s
	[INFO] 10.244.0.4:41091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141133s
	[INFO] 10.244.0.4:44476 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202801s
	[INFO] 10.244.0.4:45234 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104556s
	[INFO] 10.244.1.2:39647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182075s
	[INFO] 10.244.1.2:50588 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151414s
	[INFO] 10.244.1.2:41606 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195991s
	[INFO] 10.244.2.2:53483 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000232191s
	[INFO] 10.244.2.2:60437 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000132599s
	[INFO] 10.244.1.2:51965 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166052s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a678a5fd4129c9d5ed065e6d6cf82de15766bc944d4ba612411d966c79dea733] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[457864532]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Apr-2024 21:56:10.069) (total time: 10001ms):
	Trace[457864532]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:56:20.070)
	Trace[457864532]: [10.001189318s] [10.001189318s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35720->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2020089354]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Apr-2024 21:56:16.706) (total time: 10238ms):
	Trace[2020089354]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35720->10.96.0.1:443: read: connection reset by peer 10238ms (21:56:26.945)
	Trace[2020089354]: [10.238860965s] [10.238860965s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35720->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c] <==
	[INFO] 10.244.0.4:51293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085331s
	[INFO] 10.244.0.4:55321 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087493s
	[INFO] 10.244.0.4:59685 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001579648s
	[INFO] 10.244.0.4:33041 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157393s
	[INFO] 10.244.0.4:58677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109886s
	[INFO] 10.244.1.2:59156 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010739s
	[INFO] 10.244.1.2:53747 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144738s
	[INFO] 10.244.2.2:48166 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144032s
	[INFO] 10.244.2.2:36301 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000211342s
	[INFO] 10.244.0.4:34383 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072486s
	[INFO] 10.244.1.2:47623 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000275299s
	[INFO] 10.244.2.2:36199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000346157s
	[INFO] 10.244.2.2:51401 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193332s
	[INFO] 10.244.0.4:48691 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082711s
	[INFO] 10.244.0.4:37702 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000047018s
	[INFO] 10.244.0.4:59456 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000854s
	[INFO] 10.244.0.4:56014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070317s
	[INFO] 10.244.1.2:47145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204326s
	[INFO] 10.244.1.2:36898 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127022s
	[INFO] 10.244.1.2:42608 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109931s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-454952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T21_44_47_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:44:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:58:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:56:43 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:56:43 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:56:43 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:56:43 +0000   Thu, 04 Apr 2024 21:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    ha-454952
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bcaf06686d84ca785ca1e79fc3ee92b
	  System UUID:                9bcaf066-86d8-4ca7-85ca-1e79fc3ee92b
	  Boot ID:                    00b02ff9-8c43-4004-ab1c-4fcde5b8a674
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-q56fw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-76f75df574-9qsz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-76f75df574-hsdfw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-454952                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-v8wv6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-454952             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-454952    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-gjvm9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-454952             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-454952                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 92s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-454952 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-454952 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-454952 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-454952 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-454952 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-454952 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-454952 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Warning  ContainerGCFailed        2m35s (x2 over 3m35s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           87s                    node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal   RegisteredNode           81s                    node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal   RegisteredNode           36s                    node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	
	
	Name:               ha-454952-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T21_46_24_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:46:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:58:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:57:32 +0000   Thu, 04 Apr 2024 21:56:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:57:32 +0000   Thu, 04 Apr 2024 21:56:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:57:32 +0000   Thu, 04 Apr 2024 21:56:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:57:32 +0000   Thu, 04 Apr 2024 21:56:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-454952-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f458ea60975d458aa9cb6e203993b49a
	  System UUID:                f458ea60-975d-458a-a9cb-6e203993b49a
	  Boot ID:                    c363bc48-d9ed-42ea-b93d-193390f6e28a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rshl2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-454952-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-7c9dv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-454952-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-454952-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6nkxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-454952-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-454952-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-454952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-454952-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-454952-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                  node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  NodeNotReady             8m37s                node-controller  Node ha-454952-m02 status is now: NodeNotReady
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node ha-454952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node ha-454952-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x7 over 119s)  kubelet          Node ha-454952-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           87s                  node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           81s                  node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           36s                  node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	
	
	Name:               ha-454952-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T21_47_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:47:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:58:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:57:53 +0000   Thu, 04 Apr 2024 21:47:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:57:53 +0000   Thu, 04 Apr 2024 21:47:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:57:53 +0000   Thu, 04 Apr 2024 21:47:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:57:53 +0000   Thu, 04 Apr 2024 21:47:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-454952-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b7367a50ec545c4ae6fb446cfb73753
	  System UUID:                4b7367a5-0ec5-45c4-ae6f-b446cfb73753
	  Boot ID:                    b03218d1-d61a-459e-b061-328f7e3453ca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-8qf48                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-454952-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-7v9fp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-454952-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-454952-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fl4jh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-454952-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-454952-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-454952-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-454952-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-454952-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	  Normal   RegisteredNode           81s                node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node ha-454952-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node ha-454952-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node ha-454952-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 59s                kubelet          Node ha-454952-m03 has been rebooted, boot id: b03218d1-d61a-459e-b061-328f7e3453ca
	  Normal   RegisteredNode           36s                node-controller  Node ha-454952-m03 event: Registered Node ha-454952-m03 in Controller
	
	
	Name:               ha-454952-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T21_48_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:48:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:58:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:58:12 +0000   Thu, 04 Apr 2024 21:58:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:58:12 +0000   Thu, 04 Apr 2024 21:58:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:58:12 +0000   Thu, 04 Apr 2024 21:58:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:58:12 +0000   Thu, 04 Apr 2024 21:58:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-454952-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eaf323303c74873975b4953c592319b
	  System UUID:                0eaf3233-03c7-4873-975b-4953c592319b
	  Boot ID:                    cf21f544-0ad6-43b6-a6f9-d781e0417766
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6mmgj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m48s
	  kube-system                 kube-proxy-5j62j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5s                     kube-proxy       
	  Normal   Starting                 9m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m48s (x2 over 9m48s)  kubelet          Node ha-454952-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m48s (x2 over 9m48s)  kubelet          Node ha-454952-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m48s (x2 over 9m48s)  kubelet          Node ha-454952-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m47s                  node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal   RegisteredNode           9m43s                  node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal   RegisteredNode           9m43s                  node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal   NodeReady                9m38s                  kubelet          Node ha-454952-m04 status is now: NodeReady
	  Normal   RegisteredNode           87s                    node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal   RegisteredNode           81s                    node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal   NodeNotReady             47s                    node-controller  Node ha-454952-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           36s                    node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s                     kubelet          Node ha-454952-m04 has been rebooted, boot id: cf21f544-0ad6-43b6-a6f9-d781e0417766
	  Normal   NodeReady                9s                     kubelet          Node ha-454952-m04 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  8s (x2 over 9s)        kubelet          Node ha-454952-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 9s)        kubelet          Node ha-454952-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 9s)        kubelet          Node ha-454952-m04 status is now: NodeHasSufficientPID
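
For reference, the node descriptions above (ha-454952-m03 and ha-454952-m04) are ordinary "kubectl describe node" output. With the ha-454952 profile's kubeconfig context loaded, roughly the following would reproduce them; the exact invocation is an assumption and was not part of the captured run:

  kubectl --context ha-454952 describe node ha-454952-m03 ha-454952-m04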
	
	
	==> dmesg <==
	[  +0.060191] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.177107] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.151661] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.307912] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.603000] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.064613] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.478091] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +0.520027] kauditd_printk_skb: 48 callbacks suppressed
	[  +7.408849] systemd-fstab-generator[1387]: Ignoring "noauto" option for root device
	[  +0.092051] kauditd_printk_skb: 49 callbacks suppressed
	[ +12.761594] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 4 21:46] kauditd_printk_skb: 76 callbacks suppressed
	[Apr 4 21:52] kauditd_printk_skb: 1 callbacks suppressed
	[Apr 4 21:55] systemd-fstab-generator[3834]: Ignoring "noauto" option for root device
	[  +0.166701] systemd-fstab-generator[3846]: Ignoring "noauto" option for root device
	[  +0.186909] systemd-fstab-generator[3860]: Ignoring "noauto" option for root device
	[  +0.154277] systemd-fstab-generator[3872]: Ignoring "noauto" option for root device
	[  +0.306196] systemd-fstab-generator[3900]: Ignoring "noauto" option for root device
	[  +4.229874] systemd-fstab-generator[4006]: Ignoring "noauto" option for root device
	[  +0.091286] kauditd_printk_skb: 100 callbacks suppressed
	[Apr 4 21:56] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.636937] kauditd_printk_skb: 98 callbacks suppressed
	[ +10.074127] kauditd_printk_skb: 1 callbacks suppressed
	[ +20.014370] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.552840] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3] <==
	{"level":"warn","ts":"2024-04-04T21:54:22.156004Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T21:54:21.448653Z","time spent":"707.343465ms","remote":"127.0.0.1:43708","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":0,"response size":0,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" limit:10000 "}
	2024/04/04 21:54:22 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-04T21:54:22.155543Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T21:54:21.453013Z","time spent":"702.51763ms","remote":"127.0.0.1:43684","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	2024/04/04 21:54:22 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-04T21:54:22.155458Z","caller":"traceutil/trace.go:171","msg":"trace[819982146] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; }","duration":"386.57806ms","start":"2024-04-04T21:54:21.768875Z","end":"2024-04-04T21:54:22.155453Z","steps":["trace[819982146] 'agreement among raft nodes before linearized reading'  (duration: 368.683042ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T21:54:22.156437Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T21:54:21.768842Z","time spent":"387.586118ms","remote":"127.0.0.1:43616","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" limit:500 "}
	2024/04/04 21:54:22 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-04T21:54:22.281431Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1d3fba3e6c6ecbcd","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-04T21:54:22.281662Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.281788Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.281814Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.281866Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.281918Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.281984Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.282019Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.282027Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.282037Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.282056Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.28215Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.282233Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.282295Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.282333Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.285495Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-04-04T21:54:22.285778Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-04-04T21:54:22.285821Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-454952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"]}
	
	
	==> etcd [ebbea1fa16613e48ca7af74cdbecc207ba6536f8a1410b2c3af9fee06da42d20] <==
	{"level":"warn","ts":"2024-04-04T21:57:16.801066Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:57:16.828647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:57:16.831386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:57:16.90111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1d3fba3e6c6ecbcd","from":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-04T21:57:19.118412Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.217:2380/version","remote-member-id":"409f8332ca29f5e9","error":"Get \"https://192.168.39.217:2380/version\": dial tcp 192.168.39.217:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-04T21:57:19.118464Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"409f8332ca29f5e9","error":"Get \"https://192.168.39.217:2380/version\": dial tcp 192.168.39.217:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-04T21:57:20.978267Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"409f8332ca29f5e9","rtt":"0s","error":"dial tcp 192.168.39.217:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-04T21:57:20.978324Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"409f8332ca29f5e9","rtt":"0s","error":"dial tcp 192.168.39.217:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-04T21:57:23.120754Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.217:2380/version","remote-member-id":"409f8332ca29f5e9","error":"Get \"https://192.168.39.217:2380/version\": dial tcp 192.168.39.217:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-04T21:57:23.120908Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"409f8332ca29f5e9","error":"Get \"https://192.168.39.217:2380/version\": dial tcp 192.168.39.217:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-04T21:57:25.979488Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"409f8332ca29f5e9","rtt":"0s","error":"dial tcp 192.168.39.217:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-04T21:57:25.979768Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"409f8332ca29f5e9","rtt":"0s","error":"dial tcp 192.168.39.217:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-04T21:57:27.123244Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.217:2380/version","remote-member-id":"409f8332ca29f5e9","error":"Get \"https://192.168.39.217:2380/version\": dial tcp 192.168.39.217:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-04T21:57:27.123304Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"409f8332ca29f5e9","error":"Get \"https://192.168.39.217:2380/version\": dial tcp 192.168.39.217:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-04T21:57:27.405136Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1d3fba3e6c6ecbcd","to":"409f8332ca29f5e9","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-04T21:57:27.405315Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:57:27.405441Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:57:27.424184Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1d3fba3e6c6ecbcd","to":"409f8332ca29f5e9","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-04T21:57:27.424253Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:57:27.453604Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"warn","ts":"2024-04-04T21:57:27.453727Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.217:45160","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-04-04T21:57:27.4581Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"warn","ts":"2024-04-04T21:57:27.460787Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.217:45190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-04T21:57:27.460995Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.217:45170","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-04-04T21:57:44.059947Z","caller":"traceutil/trace.go:171","msg":"trace[314385097] transaction","detail":"{read_only:false; response_revision:2442; number_of_response:1; }","duration":"116.761136ms","start":"2024-04-04T21:57:43.943139Z","end":"2024-04-04T21:57:44.0599Z","steps":["trace[314385097] 'process raft request'  (duration: 116.425736ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:58:22 up 14 min,  0 users,  load average: 0.58, 0.58, 0.41
	Linux ha-454952 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4a6cc4d61d02bd1b7fb2857ec3294e5d9a2bdb3ee135480853f3cc50e864ba0e] <==
	I0404 21:57:46.837291       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 21:57:56.847236       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 21:57:56.847280       1 main.go:227] handling current node
	I0404 21:57:56.847293       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 21:57:56.847299       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 21:57:56.847412       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0404 21:57:56.847418       1 main.go:250] Node ha-454952-m03 has CIDR [10.244.2.0/24] 
	I0404 21:57:56.847456       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 21:57:56.847460       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 21:58:06.860901       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 21:58:06.861105       1 main.go:227] handling current node
	I0404 21:58:06.861155       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 21:58:06.861192       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 21:58:06.861968       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0404 21:58:06.862541       1 main.go:250] Node ha-454952-m03 has CIDR [10.244.2.0/24] 
	I0404 21:58:06.862624       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 21:58:06.862630       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 21:58:16.870750       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 21:58:16.870868       1 main.go:227] handling current node
	I0404 21:58:16.870911       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 21:58:16.870942       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 21:58:16.871138       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0404 21:58:16.871179       1 main.go:250] Node ha-454952-m03 has CIDR [10.244.2.0/24] 
	I0404 21:58:16.871266       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 21:58:16.871286       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [eee2070be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419] <==
	I0404 21:56:05.482543       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0404 21:56:05.482791       1 main.go:107] hostIP = 192.168.39.13
	podIP = 192.168.39.13
	I0404 21:56:05.483026       1 main.go:116] setting mtu 1500 for CNI 
	I0404 21:56:05.483075       1 main.go:146] kindnetd IP family: "ipv4"
	I0404 21:56:05.483118       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0404 21:56:08.514431       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0404 21:56:11.585226       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0404 21:56:22.591301       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0404 21:56:26.945200       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.117:36178->10.96.0.1:443: read: connection reset by peer
	I0404 21:56:29.946371       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
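
The panic above shows this kindnetd container exhausting its retries against the in-cluster apiserver service (10.96.0.1:443) while the control plane was restarting; the kindnet log in the preceding section shows the replacement container handling nodes normally again by 21:57. One way to confirm that the kubernetes service is backed by healthy apiserver endpoints after such a restart is the following assumed command, not part of the captured run:

  kubectl --context ha-454952 get endpoints kubernetes -n default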
	
	
	==> kube-apiserver [9ee36899c21ef1e958722bdf5f5d9a1353202a8067eb41c81e4cdd8fe7c8129b] <==
	I0404 21:56:05.365776       1 options.go:222] external host was not specified, using 192.168.39.13
	I0404 21:56:05.370872       1 server.go:148] Version: v1.29.3
	I0404 21:56:05.370929       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 21:56:05.810788       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0404 21:56:05.832748       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0404 21:56:05.833751       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0404 21:56:05.834067       1 instance.go:297] Using reconciler: lease
	W0404 21:56:25.800806       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0404 21:56:25.800806       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0404 21:56:25.835801       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
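
The fatal "Error creating leases" above indicates this kube-apiserver instance timed out waiting for etcd (the gRPC dials to 127.0.0.1:2379 just above it were failing) and exited; the instance in the next section came up successfully. After a restart like this, apiserver health can be spot-checked with something like the following assumed command, also not part of the captured run:

  kubectl --context ha-454952 get --raw='/readyz?verbose'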
	
	
	==> kube-apiserver [b0e409496e2bfa6083bb1dd353fce8e7415c5543ae14bb9bccdf87e80d8ddec7] <==
	I0404 21:56:38.025184       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0404 21:56:38.027151       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0404 21:56:38.027348       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0404 21:56:38.046462       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0404 21:56:38.048774       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0404 21:56:38.109607       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0404 21:56:38.120578       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0404 21:56:38.121061       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0404 21:56:38.123020       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0404 21:56:38.123112       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0404 21:56:38.123891       1 aggregator.go:165] initial CRD sync complete...
	I0404 21:56:38.123931       1 autoregister_controller.go:141] Starting autoregister controller
	I0404 21:56:38.123938       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0404 21:56:38.123943       1 cache.go:39] Caches are synced for autoregister controller
	I0404 21:56:38.124567       1 shared_informer.go:318] Caches are synced for configmaps
	I0404 21:56:38.128277       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0404 21:56:38.128342       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0404 21:56:38.128366       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	W0404 21:56:38.206521       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	I0404 21:56:38.208294       1 controller.go:624] quota admission added evaluator for: endpoints
	I0404 21:56:38.221637       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0404 21:56:38.227999       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0404 21:56:39.032221       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0404 21:56:40.262359       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.13 192.168.39.217]
	W0404 21:56:50.252496       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.13 192.168.39.60]
	
	
	==> kube-controller-manager [9b188c8442602f8efd7eb8e9047476741c8c5c865ede423661d758bb83b6b05c] <==
	I0404 21:57:00.811647       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0404 21:57:00.811900       1 shared_informer.go:318] Caches are synced for disruption
	I0404 21:57:00.811981       1 shared_informer.go:318] Caches are synced for service account
	I0404 21:57:00.812215       1 shared_informer.go:318] Caches are synced for namespace
	I0404 21:57:00.812964       1 shared_informer.go:318] Caches are synced for PVC protection
	I0404 21:57:00.816007       1 shared_informer.go:318] Caches are synced for GC
	I0404 21:57:00.820762       1 shared_informer.go:318] Caches are synced for endpoint
	I0404 21:57:00.827427       1 shared_informer.go:318] Caches are synced for TTL
	I0404 21:57:00.832227       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0404 21:57:00.837713       1 shared_informer.go:318] Caches are synced for deployment
	I0404 21:57:00.848391       1 shared_informer.go:318] Caches are synced for PV protection
	I0404 21:57:00.867906       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0404 21:57:00.881394       1 shared_informer.go:318] Caches are synced for HPA
	I0404 21:57:00.988015       1 shared_informer.go:318] Caches are synced for daemon sets
	I0404 21:57:01.019919       1 shared_informer.go:318] Caches are synced for stateful set
	I0404 21:57:01.020113       1 shared_informer.go:318] Caches are synced for resource quota
	I0404 21:57:01.032120       1 shared_informer.go:318] Caches are synced for resource quota
	I0404 21:57:01.366810       1 shared_informer.go:318] Caches are synced for garbage collector
	I0404 21:57:01.400257       1 shared_informer.go:318] Caches are synced for garbage collector
	I0404 21:57:01.400347       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0404 21:57:23.534834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="41.24675ms"
	I0404 21:57:23.534959       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="50.512µs"
	I0404 21:57:47.929014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.580619ms"
	I0404 21:57:47.930012       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="403.351µs"
	I0404 21:58:12.916836       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-454952-m04"
	
	
	==> kube-controller-manager [b9b43e4ef90fb190c80b32153e55f8d5e4d07a57a5a7f58e8ae3270c59a5b7a4] <==
	I0404 21:56:06.415258       1 serving.go:380] Generated self-signed cert in-memory
	I0404 21:56:06.745295       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0404 21:56:06.745394       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 21:56:06.747865       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0404 21:56:06.748131       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0404 21:56:06.748279       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0404 21:56:06.749115       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0404 21:56:26.843820       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.13:8443/healthz\": dial tcp 192.168.39.13:8443: connect: connection refused"
	
	
	==> kube-proxy [90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05] <==
	E0404 21:53:03.233467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:03.233607       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:03.233630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:03.233631       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:03.233899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:10.081256       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:10.081355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:10.081442       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:10.081464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:10.081561       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:10.081663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:19.170648       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:19.170925       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:19.171475       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:19.171535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:19.171967       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:19.172100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:37.601976       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:37.602227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:40.673310       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:40.674356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:40.674289       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:40.674433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:54:20.609491       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:54:20.609597       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [a3a568a83338d7ba5e0ad5f31df5071f6aa7c2b4eb6f48a92c98d29ac8bd266d] <==
	I0404 21:56:06.692184       1 server_others.go:72] "Using iptables proxy"
	E0404 21:56:08.129489       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-454952\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0404 21:56:11.201400       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-454952\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0404 21:56:14.273934       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-454952\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0404 21:56:20.417563       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-454952\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0404 21:56:32.706530       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-454952\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0404 21:56:48.887588       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.13"]
	I0404 21:56:48.970991       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 21:56:48.971133       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 21:56:48.971490       1 server_others.go:168] "Using iptables Proxier"
	I0404 21:56:48.976518       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 21:56:48.976922       1 server.go:865] "Version info" version="v1.29.3"
	I0404 21:56:48.977000       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 21:56:48.980202       1 config.go:188] "Starting service config controller"
	I0404 21:56:48.980543       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 21:56:48.980646       1 config.go:97] "Starting endpoint slice config controller"
	I0404 21:56:48.980787       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 21:56:48.982090       1 config.go:315] "Starting node config controller"
	I0404 21:56:48.986764       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 21:56:49.081505       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0404 21:56:49.081601       1 shared_informer.go:318] Caches are synced for service config
	I0404 21:56:49.086845       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [aeb3e4500f1988eec953c2f90b45fb2e4ce58d37fcb5a8d83b033102bfb7557a] <==
	W0404 21:56:34.548381       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://192.168.39.13:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:34.548491       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.13:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:34.716295       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.39.13:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:34.716362       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.13:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:34.942408       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.13:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:34.942508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.13:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:34.984574       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.13:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:34.984657       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.13:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:35.248404       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.13:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:35.248525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.13:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:35.406035       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://192.168.39.13:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:35.406155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.13:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:35.509436       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.13:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:35.509548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.13:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:35.662102       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://192.168.39.13:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:35.662233       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.13:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:35.989472       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.13:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:35.989531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.13:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:38.064090       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0404 21:56:38.065143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0404 21:56:38.065279       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0404 21:56:38.066475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0404 21:56:38.066820       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0404 21:56:38.067047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0404 21:56:42.351372       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048] <==
	E0404 21:54:18.397350       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 21:54:18.493612       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0404 21:54:18.493811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0404 21:54:18.757833       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0404 21:54:18.757979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0404 21:54:18.798525       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0404 21:54:18.798625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0404 21:54:18.992861       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0404 21:54:18.992958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0404 21:54:19.084774       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0404 21:54:19.084902       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0404 21:54:19.291443       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0404 21:54:19.291553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0404 21:54:19.455413       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0404 21:54:19.455444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0404 21:54:20.128871       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0404 21:54:20.128992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0404 21:54:20.146928       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0404 21:54:20.147044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0404 21:54:20.493305       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0404 21:54:20.493411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0404 21:54:22.113581       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0404 21:54:22.118600       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0404 21:54:22.119245       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0404 21:54:22.119560       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 04 21:56:38 ha-454952 kubelet[1393]: W0404 21:56:38.849138    1393 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 04 21:56:38 ha-454952 kubelet[1393]: E0404 21:56:38.849248    1393 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 04 21:56:38 ha-454952 kubelet[1393]: I0404 21:56:38.849608    1393 status_manager.go:853] "Failed to get status for pod" podUID="5af3d10e-47b7-439c-80e3-8ee328d87f16" pod="kube-system/coredns-76f75df574-9qsz7" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-9qsz7\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 04 21:56:43 ha-454952 kubelet[1393]: I0404 21:56:43.699408    1393 scope.go:117] "RemoveContainer" containerID="bc8218a5029f13c2ed80f0fe1a747232912e939468d12ff6da8d1f9801f45f46"
	Apr 04 21:56:43 ha-454952 kubelet[1393]: E0404 21:56:43.699828    1393 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8531ddb-fa9d-4efe-91cc-072e75a5897d)\"" pod="kube-system/storage-provisioner" podUID="c8531ddb-fa9d-4efe-91cc-072e75a5897d"
	Apr 04 21:56:46 ha-454952 kubelet[1393]: E0404 21:56:46.768242    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:56:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:56:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:56:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:56:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 21:56:48 ha-454952 kubelet[1393]: I0404 21:56:48.699786    1393 scope.go:117] "RemoveContainer" containerID="eee2070be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419"
	Apr 04 21:56:48 ha-454952 kubelet[1393]: E0404 21:56:48.700057    1393 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-v8wv6_kube-system(44250298-dce4-4e12-88c2-e347b4a63711)\"" pod="kube-system/kindnet-v8wv6" podUID="44250298-dce4-4e12-88c2-e347b4a63711"
	Apr 04 21:56:48 ha-454952 kubelet[1393]: I0404 21:56:48.701120    1393 scope.go:117] "RemoveContainer" containerID="b9b43e4ef90fb190c80b32153e55f8d5e4d07a57a5a7f58e8ae3270c59a5b7a4"
	Apr 04 21:56:57 ha-454952 kubelet[1393]: I0404 21:56:57.699759    1393 scope.go:117] "RemoveContainer" containerID="bc8218a5029f13c2ed80f0fe1a747232912e939468d12ff6da8d1f9801f45f46"
	Apr 04 21:57:00 ha-454952 kubelet[1393]: I0404 21:57:00.699629    1393 scope.go:117] "RemoveContainer" containerID="eee2070be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419"
	Apr 04 21:57:00 ha-454952 kubelet[1393]: E0404 21:57:00.699949    1393 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-v8wv6_kube-system(44250298-dce4-4e12-88c2-e347b4a63711)\"" pod="kube-system/kindnet-v8wv6" podUID="44250298-dce4-4e12-88c2-e347b4a63711"
	Apr 04 21:57:07 ha-454952 kubelet[1393]: I0404 21:57:07.761891    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-7fdf7869d9-q56fw" podStartSLOduration=550.52905852 podStartE2EDuration="9m12.761790564s" podCreationTimestamp="2024-04-04 21:47:55 +0000 UTC" firstStartedPulling="2024-04-04 21:47:57.100656675 +0000 UTC m=+190.614780474" lastFinishedPulling="2024-04-04 21:47:59.333388732 +0000 UTC m=+192.847512518" observedRunningTime="2024-04-04 21:47:59.62197176 +0000 UTC m=+193.136095571" watchObservedRunningTime="2024-04-04 21:57:07.761790564 +0000 UTC m=+741.275914362"
	Apr 04 21:57:15 ha-454952 kubelet[1393]: I0404 21:57:15.699511    1393 scope.go:117] "RemoveContainer" containerID="eee2070be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419"
	Apr 04 21:57:27 ha-454952 kubelet[1393]: I0404 21:57:27.700238    1393 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-454952" podUID="87898a7a-a2df-46cf-8b58-f6e5ca4e5f7b"
	Apr 04 21:57:27 ha-454952 kubelet[1393]: I0404 21:57:27.723150    1393 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-454952"
	Apr 04 21:57:46 ha-454952 kubelet[1393]: E0404 21:57:46.749567    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:57:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:57:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:57:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:57:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 21:58:20.799853   28254 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/16143-5297/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-454952 -n ha-454952
helpers_test.go:261: (dbg) Run:  kubectl --context ha-454952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (364.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 stop -v=7 --alsologtostderr
E0404 21:58:50.479825   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 22:00:13.527344   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-454952 stop -v=7 --alsologtostderr: exit status 82 (2m0.496467108s)

                                                
                                                
-- stdout --
	* Stopping node "ha-454952-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 21:58:40.948041   28642 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:58:40.948327   28642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:58:40.948337   28642 out.go:304] Setting ErrFile to fd 2...
	I0404 21:58:40.948341   28642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:58:40.948513   28642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:58:40.948734   28642 out.go:298] Setting JSON to false
	I0404 21:58:40.948808   28642 mustload.go:65] Loading cluster: ha-454952
	I0404 21:58:40.949224   28642 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:58:40.949312   28642 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:58:40.949486   28642 mustload.go:65] Loading cluster: ha-454952
	I0404 21:58:40.949630   28642 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:58:40.949673   28642 stop.go:39] StopHost: ha-454952-m04
	I0404 21:58:40.950107   28642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:58:40.950155   28642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:58:40.965038   28642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I0404 21:58:40.965534   28642 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:58:40.966120   28642 main.go:141] libmachine: Using API Version  1
	I0404 21:58:40.966155   28642 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:58:40.966540   28642 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:58:40.969087   28642 out.go:177] * Stopping node "ha-454952-m04"  ...
	I0404 21:58:40.970445   28642 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0404 21:58:40.970473   28642 main.go:141] libmachine: (ha-454952-m04) Calling .DriverName
	I0404 21:58:40.970716   28642 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0404 21:58:40.970743   28642 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHHostname
	I0404 21:58:40.973828   28642 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:58:40.974325   28642 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:58:07 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 21:58:40.974358   28642 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 21:58:40.974548   28642 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHPort
	I0404 21:58:40.974715   28642 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHKeyPath
	I0404 21:58:40.974884   28642 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHUsername
	I0404 21:58:40.975014   28642 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m04/id_rsa Username:docker}
	I0404 21:58:41.059126   28642 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0404 21:58:41.113587   28642 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0404 21:58:41.167331   28642 main.go:141] libmachine: Stopping "ha-454952-m04"...
	I0404 21:58:41.167360   28642 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 21:58:41.168863   28642 main.go:141] libmachine: (ha-454952-m04) Calling .Stop
	I0404 21:58:41.172329   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 0/120
	I0404 21:58:42.173787   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 1/120
	I0404 21:58:43.175233   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 2/120
	I0404 21:58:44.176556   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 3/120
	I0404 21:58:45.178137   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 4/120
	I0404 21:58:46.180310   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 5/120
	I0404 21:58:47.183059   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 6/120
	I0404 21:58:48.184654   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 7/120
	I0404 21:58:49.186585   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 8/120
	I0404 21:58:50.188265   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 9/120
	I0404 21:58:51.190012   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 10/120
	I0404 21:58:52.191651   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 11/120
	I0404 21:58:53.193452   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 12/120
	I0404 21:58:54.195004   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 13/120
	I0404 21:58:55.196713   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 14/120
	I0404 21:58:56.198666   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 15/120
	I0404 21:58:57.200340   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 16/120
	I0404 21:58:58.202672   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 17/120
	I0404 21:58:59.204447   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 18/120
	I0404 21:59:00.206656   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 19/120
	I0404 21:59:01.209041   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 20/120
	I0404 21:59:02.210783   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 21/120
	I0404 21:59:03.212340   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 22/120
	I0404 21:59:04.213672   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 23/120
	I0404 21:59:05.215439   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 24/120
	I0404 21:59:06.217423   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 25/120
	I0404 21:59:07.218788   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 26/120
	I0404 21:59:08.220226   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 27/120
	I0404 21:59:09.221576   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 28/120
	I0404 21:59:10.223058   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 29/120
	I0404 21:59:11.224639   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 30/120
	I0404 21:59:12.226732   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 31/120
	I0404 21:59:13.228167   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 32/120
	I0404 21:59:14.229375   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 33/120
	I0404 21:59:15.230993   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 34/120
	I0404 21:59:16.233398   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 35/120
	I0404 21:59:17.235738   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 36/120
	I0404 21:59:18.237592   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 37/120
	I0404 21:59:19.239287   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 38/120
	I0404 21:59:20.240930   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 39/120
	I0404 21:59:21.242573   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 40/120
	I0404 21:59:22.243981   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 41/120
	I0404 21:59:23.245356   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 42/120
	I0404 21:59:24.246784   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 43/120
	I0404 21:59:25.248501   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 44/120
	I0404 21:59:26.250334   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 45/120
	I0404 21:59:27.251759   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 46/120
	I0404 21:59:28.253043   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 47/120
	I0404 21:59:29.254604   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 48/120
	I0404 21:59:30.256188   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 49/120
	I0404 21:59:31.257456   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 50/120
	I0404 21:59:32.258673   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 51/120
	I0404 21:59:33.260244   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 52/120
	I0404 21:59:34.261431   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 53/120
	I0404 21:59:35.262919   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 54/120
	I0404 21:59:36.264205   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 55/120
	I0404 21:59:37.265399   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 56/120
	I0404 21:59:38.266932   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 57/120
	I0404 21:59:39.268225   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 58/120
	I0404 21:59:40.269778   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 59/120
	I0404 21:59:41.271758   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 60/120
	I0404 21:59:42.273796   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 61/120
	I0404 21:59:43.275566   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 62/120
	I0404 21:59:44.276888   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 63/120
	I0404 21:59:45.278887   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 64/120
	I0404 21:59:46.280345   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 65/120
	I0404 21:59:47.281912   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 66/120
	I0404 21:59:48.283490   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 67/120
	I0404 21:59:49.284770   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 68/120
	I0404 21:59:50.286154   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 69/120
	I0404 21:59:51.288701   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 70/120
	I0404 21:59:52.290578   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 71/120
	I0404 21:59:53.292297   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 72/120
	I0404 21:59:54.293621   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 73/120
	I0404 21:59:55.294969   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 74/120
	I0404 21:59:56.296981   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 75/120
	I0404 21:59:57.298667   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 76/120
	I0404 21:59:58.300111   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 77/120
	I0404 21:59:59.301667   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 78/120
	I0404 22:00:00.303621   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 79/120
	I0404 22:00:01.305798   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 80/120
	I0404 22:00:02.307339   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 81/120
	I0404 22:00:03.308903   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 82/120
	I0404 22:00:04.310504   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 83/120
	I0404 22:00:05.311937   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 84/120
	I0404 22:00:06.313767   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 85/120
	I0404 22:00:07.315351   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 86/120
	I0404 22:00:08.316878   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 87/120
	I0404 22:00:09.318502   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 88/120
	I0404 22:00:10.319794   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 89/120
	I0404 22:00:11.322225   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 90/120
	I0404 22:00:12.323577   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 91/120
	I0404 22:00:13.324855   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 92/120
	I0404 22:00:14.327059   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 93/120
	I0404 22:00:15.328671   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 94/120
	I0404 22:00:16.330820   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 95/120
	I0404 22:00:17.332222   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 96/120
	I0404 22:00:18.333641   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 97/120
	I0404 22:00:19.335117   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 98/120
	I0404 22:00:20.336649   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 99/120
	I0404 22:00:21.338513   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 100/120
	I0404 22:00:22.340448   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 101/120
	I0404 22:00:23.343111   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 102/120
	I0404 22:00:24.344974   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 103/120
	I0404 22:00:25.346678   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 104/120
	I0404 22:00:26.348577   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 105/120
	I0404 22:00:27.350757   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 106/120
	I0404 22:00:28.352609   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 107/120
	I0404 22:00:29.355054   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 108/120
	I0404 22:00:30.356715   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 109/120
	I0404 22:00:31.358906   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 110/120
	I0404 22:00:32.360599   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 111/120
	I0404 22:00:33.362451   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 112/120
	I0404 22:00:34.364195   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 113/120
	I0404 22:00:35.365600   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 114/120
	I0404 22:00:36.367793   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 115/120
	I0404 22:00:37.369174   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 116/120
	I0404 22:00:38.370957   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 117/120
	I0404 22:00:39.372565   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 118/120
	I0404 22:00:40.374592   28642 main.go:141] libmachine: (ha-454952-m04) Waiting for machine to stop 119/120
	I0404 22:00:41.375212   28642 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0404 22:00:41.375269   28642 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0404 22:00:41.377603   28642 out.go:177] 
	W0404 22:00:41.379548   28642 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0404 22:00:41.379570   28642 out.go:239] * 
	* 
	W0404 22:00:41.382052   28642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 22:00:41.383734   28642 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-454952 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr: exit status 3 (19.034108776s)

                                                
                                                
-- stdout --
	ha-454952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-454952-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 22:00:41.441936   28979 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:00:41.442062   28979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:00:41.442074   28979 out.go:304] Setting ErrFile to fd 2...
	I0404 22:00:41.442080   28979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:00:41.442296   28979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:00:41.442471   28979 out.go:298] Setting JSON to false
	I0404 22:00:41.442501   28979 mustload.go:65] Loading cluster: ha-454952
	I0404 22:00:41.442620   28979 notify.go:220] Checking for updates...
	I0404 22:00:41.442922   28979 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:00:41.442938   28979 status.go:255] checking status of ha-454952 ...
	I0404 22:00:41.443315   28979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:00:41.443381   28979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:00:41.466384   28979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36397
	I0404 22:00:41.466861   28979 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:00:41.467561   28979 main.go:141] libmachine: Using API Version  1
	I0404 22:00:41.467618   28979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:00:41.468010   28979 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:00:41.468261   28979 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 22:00:41.470105   28979 status.go:330] ha-454952 host status = "Running" (err=<nil>)
	I0404 22:00:41.470127   28979 host.go:66] Checking if "ha-454952" exists ...
	I0404 22:00:41.470438   28979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:00:41.470492   28979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:00:41.484909   28979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40883
	I0404 22:00:41.485324   28979 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:00:41.485824   28979 main.go:141] libmachine: Using API Version  1
	I0404 22:00:41.485859   28979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:00:41.486148   28979 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:00:41.486327   28979 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 22:00:41.489249   28979 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 22:00:41.489752   28979 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 22:00:41.489791   28979 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 22:00:41.489933   28979 host.go:66] Checking if "ha-454952" exists ...
	I0404 22:00:41.490316   28979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:00:41.490357   28979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:00:41.505181   28979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44137
	I0404 22:00:41.505613   28979 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:00:41.506042   28979 main.go:141] libmachine: Using API Version  1
	I0404 22:00:41.506062   28979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:00:41.506347   28979 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:00:41.506549   28979 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 22:00:41.506719   28979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 22:00:41.506748   28979 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 22:00:41.509906   28979 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 22:00:41.510304   28979 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 22:00:41.510345   28979 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 22:00:41.510450   28979 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 22:00:41.510629   28979 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 22:00:41.510828   28979 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 22:00:41.510984   28979 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 22:00:41.606596   28979 ssh_runner.go:195] Run: systemctl --version
	I0404 22:00:41.613719   28979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:00:41.633030   28979 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 22:00:41.633076   28979 api_server.go:166] Checking apiserver status ...
	I0404 22:00:41.633115   28979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:00:41.650972   28979 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5141/cgroup
	W0404 22:00:41.661362   28979 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5141/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:00:41.661420   28979 ssh_runner.go:195] Run: ls
	I0404 22:00:41.666602   28979 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 22:00:41.671364   28979 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 22:00:41.671386   28979 status.go:422] ha-454952 apiserver status = Running (err=<nil>)
	I0404 22:00:41.671395   28979 status.go:257] ha-454952 status: &{Name:ha-454952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 22:00:41.671411   28979 status.go:255] checking status of ha-454952-m02 ...
	I0404 22:00:41.671722   28979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:00:41.671759   28979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:00:41.686450   28979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0404 22:00:41.686825   28979 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:00:41.687259   28979 main.go:141] libmachine: Using API Version  1
	I0404 22:00:41.687283   28979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:00:41.687556   28979 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:00:41.687835   28979 main.go:141] libmachine: (ha-454952-m02) Calling .GetState
	I0404 22:00:41.689389   28979 status.go:330] ha-454952-m02 host status = "Running" (err=<nil>)
	I0404 22:00:41.689409   28979 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 22:00:41.689743   28979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:00:41.689778   28979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:00:41.704300   28979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0404 22:00:41.704704   28979 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:00:41.705162   28979 main.go:141] libmachine: Using API Version  1
	I0404 22:00:41.705185   28979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:00:41.705496   28979 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:00:41.705670   28979 main.go:141] libmachine: (ha-454952-m02) Calling .GetIP
	I0404 22:00:41.708346   28979 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 22:00:41.708755   28979 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:56:11 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 22:00:41.708778   28979 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 22:00:41.708933   28979 host.go:66] Checking if "ha-454952-m02" exists ...
	I0404 22:00:41.709227   28979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:00:41.709267   28979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:00:41.723402   28979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38931
	I0404 22:00:41.723775   28979 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:00:41.724262   28979 main.go:141] libmachine: Using API Version  1
	I0404 22:00:41.724294   28979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:00:41.724624   28979 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:00:41.724797   28979 main.go:141] libmachine: (ha-454952-m02) Calling .DriverName
	I0404 22:00:41.724967   28979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 22:00:41.724997   28979 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHHostname
	I0404 22:00:41.728168   28979 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 22:00:41.728581   28979 main.go:141] libmachine: (ha-454952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:de:98", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:56:11 +0000 UTC Type:0 Mac:52:54:00:0e:de:98 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-454952-m02 Clientid:01:52:54:00:0e:de:98}
	I0404 22:00:41.728604   28979 main.go:141] libmachine: (ha-454952-m02) DBG | domain ha-454952-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:de:98 in network mk-ha-454952
	I0404 22:00:41.728732   28979 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHPort
	I0404 22:00:41.728897   28979 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHKeyPath
	I0404 22:00:41.729071   28979 main.go:141] libmachine: (ha-454952-m02) Calling .GetSSHUsername
	I0404 22:00:41.729299   28979 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m02/id_rsa Username:docker}
	I0404 22:00:41.818557   28979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:00:41.839247   28979 kubeconfig.go:125] found "ha-454952" server: "https://192.168.39.254:8443"
	I0404 22:00:41.839274   28979 api_server.go:166] Checking apiserver status ...
	I0404 22:00:41.839321   28979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:00:41.855599   28979 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1552/cgroup
	W0404 22:00:41.869938   28979 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1552/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:00:41.870018   28979 ssh_runner.go:195] Run: ls
	I0404 22:00:41.874642   28979 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0404 22:00:41.879245   28979 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0404 22:00:41.879276   28979 status.go:422] ha-454952-m02 apiserver status = Running (err=<nil>)
	I0404 22:00:41.879288   28979 status.go:257] ha-454952-m02 status: &{Name:ha-454952-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 22:00:41.879320   28979 status.go:255] checking status of ha-454952-m04 ...
	I0404 22:00:41.879672   28979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:00:41.879717   28979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:00:41.896004   28979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I0404 22:00:41.896457   28979 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:00:41.897024   28979 main.go:141] libmachine: Using API Version  1
	I0404 22:00:41.897052   28979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:00:41.897382   28979 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:00:41.897583   28979 main.go:141] libmachine: (ha-454952-m04) Calling .GetState
	I0404 22:00:41.899067   28979 status.go:330] ha-454952-m04 host status = "Running" (err=<nil>)
	I0404 22:00:41.899082   28979 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 22:00:41.899348   28979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:00:41.899379   28979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:00:41.913910   28979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I0404 22:00:41.914530   28979 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:00:41.915037   28979 main.go:141] libmachine: Using API Version  1
	I0404 22:00:41.915067   28979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:00:41.915352   28979 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:00:41.915533   28979 main.go:141] libmachine: (ha-454952-m04) Calling .GetIP
	I0404 22:00:41.918452   28979 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 22:00:41.918945   28979 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:58:07 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 22:00:41.918990   28979 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 22:00:41.919157   28979 host.go:66] Checking if "ha-454952-m04" exists ...
	I0404 22:00:41.919424   28979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:00:41.919461   28979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:00:41.935320   28979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0404 22:00:41.935788   28979 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:00:41.936300   28979 main.go:141] libmachine: Using API Version  1
	I0404 22:00:41.936330   28979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:00:41.936663   28979 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:00:41.936857   28979 main.go:141] libmachine: (ha-454952-m04) Calling .DriverName
	I0404 22:00:41.937038   28979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 22:00:41.937057   28979 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHHostname
	I0404 22:00:41.939651   28979 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 22:00:41.940114   28979 main.go:141] libmachine: (ha-454952-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:b1", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:58:07 +0000 UTC Type:0 Mac:52:54:00:1d:43:b1 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-454952-m04 Clientid:01:52:54:00:1d:43:b1}
	I0404 22:00:41.940170   28979 main.go:141] libmachine: (ha-454952-m04) DBG | domain ha-454952-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:1d:43:b1 in network mk-ha-454952
	I0404 22:00:41.940278   28979 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHPort
	I0404 22:00:41.940490   28979 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHKeyPath
	I0404 22:00:41.940702   28979 main.go:141] libmachine: (ha-454952-m04) Calling .GetSSHUsername
	I0404 22:00:41.940876   28979 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952-m04/id_rsa Username:docker}
	W0404 22:01:00.420345   28979 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.251:22: connect: no route to host
	W0404 22:01:00.420432   28979 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	E0404 22:01:00.420447   28979 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0404 22:01:00.420456   28979 status.go:257] ha-454952-m04 status: &{Name:ha-454952-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0404 22:01:00.420474   28979 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-454952 -n ha-454952
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-454952 logs -n 25: (1.925416081s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-454952 ssh -n ha-454952-m02 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m03_ha-454952-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04:/home/docker/cp-test_ha-454952-m03_ha-454952-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m04 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m03_ha-454952-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp testdata/cp-test.txt                                                | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3335929805/001/cp-test_ha-454952-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952:/home/docker/cp-test_ha-454952-m04_ha-454952.txt                       |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952 sudo cat                                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952.txt                                 |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m02:/home/docker/cp-test_ha-454952-m04_ha-454952-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m02 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m03:/home/docker/cp-test_ha-454952-m04_ha-454952-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n                                                                 | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | ha-454952-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-454952 ssh -n ha-454952-m03 sudo cat                                          | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC | 04 Apr 24 21:49 UTC |
	|         | /home/docker/cp-test_ha-454952-m04_ha-454952-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-454952 node stop m02 -v=7                                                     | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-454952 node start m02 -v=7                                                    | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-454952 -v=7                                                           | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-454952 -v=7                                                                | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-454952 --wait=true -v=7                                                    | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:54 UTC | 04 Apr 24 21:58 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-454952                                                                | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:58 UTC |                     |
	| node    | ha-454952 node delete m03 -v=7                                                   | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:58 UTC | 04 Apr 24 21:58 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | ha-454952 stop -v=7                                                              | ha-454952 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 21:54:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 21:54:21.004101   27181 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:54:21.004246   27181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:54:21.004256   27181 out.go:304] Setting ErrFile to fd 2...
	I0404 21:54:21.004260   27181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:54:21.004458   27181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:54:21.004999   27181 out.go:298] Setting JSON to false
	I0404 21:54:21.005914   27181 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2206,"bootTime":1712265455,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 21:54:21.005979   27181 start.go:139] virtualization: kvm guest
	I0404 21:54:21.008504   27181 out.go:177] * [ha-454952] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 21:54:21.010754   27181 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 21:54:21.010783   27181 notify.go:220] Checking for updates...
	I0404 21:54:21.013654   27181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 21:54:21.015080   27181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:54:21.016295   27181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:54:21.017881   27181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 21:54:21.019248   27181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 21:54:21.021201   27181 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:54:21.021286   27181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 21:54:21.021684   27181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:54:21.021739   27181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:54:21.038121   27181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44891
	I0404 21:54:21.038499   27181 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:54:21.039017   27181 main.go:141] libmachine: Using API Version  1
	I0404 21:54:21.039041   27181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:54:21.039372   27181 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:54:21.039558   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:54:21.079921   27181 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 21:54:21.081214   27181 start.go:297] selected driver: kvm2
	I0404 21:54:21.081227   27181 start.go:901] validating driver "kvm2" against &{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.251 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:54:21.081365   27181 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 21:54:21.081660   27181 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:54:21.081722   27181 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 21:54:21.097862   27181 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 21:54:21.098524   27181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 21:54:21.098590   27181 cni.go:84] Creating CNI manager for ""
	I0404 21:54:21.098598   27181 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0404 21:54:21.098655   27181 start.go:340] cluster config:
	{Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.251 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:54:21.098779   27181 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:54:21.100749   27181 out.go:177] * Starting "ha-454952" primary control-plane node in "ha-454952" cluster
	I0404 21:54:21.102207   27181 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:54:21.102239   27181 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0404 21:54:21.102246   27181 cache.go:56] Caching tarball of preloaded images
	I0404 21:54:21.102311   27181 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 21:54:21.102322   27181 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0404 21:54:21.102463   27181 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/config.json ...
	I0404 21:54:21.102646   27181 start.go:360] acquireMachinesLock for ha-454952: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 21:54:21.102692   27181 start.go:364] duration metric: took 28.551µs to acquireMachinesLock for "ha-454952"
	I0404 21:54:21.102703   27181 start.go:96] Skipping create...Using existing machine configuration
	I0404 21:54:21.102708   27181 fix.go:54] fixHost starting: 
	I0404 21:54:21.102948   27181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:54:21.102977   27181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:54:21.117131   27181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0404 21:54:21.117575   27181 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:54:21.118111   27181 main.go:141] libmachine: Using API Version  1
	I0404 21:54:21.118136   27181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:54:21.118466   27181 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:54:21.118632   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:54:21.118792   27181 main.go:141] libmachine: (ha-454952) Calling .GetState
	I0404 21:54:21.120488   27181 fix.go:112] recreateIfNeeded on ha-454952: state=Running err=<nil>
	W0404 21:54:21.120522   27181 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 21:54:21.123711   27181 out.go:177] * Updating the running kvm2 "ha-454952" VM ...
	I0404 21:54:21.125202   27181 machine.go:94] provisionDockerMachine start ...
	I0404 21:54:21.125220   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:54:21.125414   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.127665   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.128055   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.128078   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.128184   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:54:21.128403   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.128571   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.128734   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:54:21.128883   27181 main.go:141] libmachine: Using SSH client type: native
	I0404 21:54:21.129069   27181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:54:21.129087   27181 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 21:54:21.250515   27181 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-454952
	
	I0404 21:54:21.250558   27181 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:54:21.250814   27181 buildroot.go:166] provisioning hostname "ha-454952"
	I0404 21:54:21.250844   27181 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:54:21.251053   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.254004   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.254445   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.254469   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.254767   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:54:21.254957   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.255154   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.255307   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:54:21.255474   27181 main.go:141] libmachine: Using SSH client type: native
	I0404 21:54:21.255674   27181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:54:21.255692   27181 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-454952 && echo "ha-454952" | sudo tee /etc/hostname
	I0404 21:54:21.395447   27181 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-454952
	
	I0404 21:54:21.395485   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.398337   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.398774   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.398808   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.399085   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:54:21.399296   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.399484   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.399610   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:54:21.399813   27181 main.go:141] libmachine: Using SSH client type: native
	I0404 21:54:21.399965   27181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:54:21.399980   27181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-454952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-454952/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-454952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 21:54:21.517652   27181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 21:54:21.517688   27181 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 21:54:21.517732   27181 buildroot.go:174] setting up certificates
	I0404 21:54:21.517743   27181 provision.go:84] configureAuth start
	I0404 21:54:21.517756   27181 main.go:141] libmachine: (ha-454952) Calling .GetMachineName
	I0404 21:54:21.517993   27181 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:54:21.520970   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.521364   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.521381   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.521526   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.523625   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.523946   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.523976   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.524163   27181 provision.go:143] copyHostCerts
	I0404 21:54:21.524191   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:54:21.524226   27181 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 21:54:21.524234   27181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 21:54:21.524303   27181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 21:54:21.524386   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:54:21.524402   27181 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 21:54:21.524409   27181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 21:54:21.524432   27181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 21:54:21.524486   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:54:21.524501   27181 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 21:54:21.524507   27181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 21:54:21.524528   27181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 21:54:21.524622   27181 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.ha-454952 san=[127.0.0.1 192.168.39.13 ha-454952 localhost minikube]
	I0404 21:54:21.777637   27181 provision.go:177] copyRemoteCerts
	I0404 21:54:21.777690   27181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 21:54:21.777712   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.780792   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.781185   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.781215   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.781406   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:54:21.781739   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.781960   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:54:21.782104   27181 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:54:21.873308   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0404 21:54:21.873406   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 21:54:21.903055   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0404 21:54:21.903139   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 21:54:21.937605   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0404 21:54:21.937683   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0404 21:54:21.975083   27181 provision.go:87] duration metric: took 457.327896ms to configureAuth
	I0404 21:54:21.975116   27181 buildroot.go:189] setting minikube options for container-runtime
	I0404 21:54:21.975349   27181 config.go:182] Loaded profile config "ha-454952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:54:21.975454   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:54:21.978275   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.978653   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:54:21.978675   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:54:21.978840   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:54:21.979028   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.979150   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:54:21.979278   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:54:21.979413   27181 main.go:141] libmachine: Using SSH client type: native
	I0404 21:54:21.979577   27181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:54:21.979592   27181 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 21:55:52.867867   27181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 21:55:52.867891   27181 machine.go:97] duration metric: took 1m31.742674738s to provisionDockerMachine
	I0404 21:55:52.867907   27181 start.go:293] postStartSetup for "ha-454952" (driver="kvm2")
	I0404 21:55:52.867918   27181 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 21:55:52.867931   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:52.868353   27181 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 21:55:52.868393   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:55:52.871209   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:52.871649   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:52.871694   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:52.871798   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:55:52.871977   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:52.872152   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:55:52.872304   27181 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:55:52.964894   27181 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 21:55:52.969624   27181 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 21:55:52.969648   27181 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 21:55:52.969709   27181 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 21:55:52.969772   27181 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 21:55:52.969782   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /etc/ssl/certs/125542.pem
	I0404 21:55:52.969855   27181 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 21:55:52.980328   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:55:53.007870   27181 start.go:296] duration metric: took 139.9513ms for postStartSetup
	I0404 21:55:53.007914   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:53.008228   27181 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0404 21:55:53.008255   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:55:53.011073   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.011508   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:53.011538   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.011693   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:55:53.011895   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:53.012063   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:55:53.012224   27181 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	W0404 21:55:53.099730   27181 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0404 21:55:53.099758   27181 fix.go:56] duration metric: took 1m31.997048796s for fixHost
	I0404 21:55:53.099781   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:55:53.102642   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.103142   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:53.103173   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.103346   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:55:53.103541   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:53.103734   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:53.103904   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:55:53.104059   27181 main.go:141] libmachine: Using SSH client type: native
	I0404 21:55:53.104255   27181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0404 21:55:53.104267   27181 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 21:55:53.221235   27181 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712267753.187426526
	
	I0404 21:55:53.221256   27181 fix.go:216] guest clock: 1712267753.187426526
	I0404 21:55:53.221263   27181 fix.go:229] Guest: 2024-04-04 21:55:53.187426526 +0000 UTC Remote: 2024-04-04 21:55:53.099766002 +0000 UTC m=+92.143139349 (delta=87.660524ms)
	I0404 21:55:53.221292   27181 fix.go:200] guest clock delta is within tolerance: 87.660524ms
	I0404 21:55:53.221297   27181 start.go:83] releasing machines lock for "ha-454952", held for 1m32.118598573s
	I0404 21:55:53.221320   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:53.221585   27181 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:55:53.224261   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.224650   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:53.224681   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.224853   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:53.225389   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:53.225572   27181 main.go:141] libmachine: (ha-454952) Calling .DriverName
	I0404 21:55:53.225663   27181 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 21:55:53.225711   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:55:53.225883   27181 ssh_runner.go:195] Run: cat /version.json
	I0404 21:55:53.225907   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHHostname
	I0404 21:55:53.228601   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.228968   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.229015   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:53.229037   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.229129   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:55:53.229305   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:53.229444   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:55:53.229490   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:53.229514   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:53.229585   27181 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:55:53.229618   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHPort
	I0404 21:55:53.229770   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHKeyPath
	I0404 21:55:53.229921   27181 main.go:141] libmachine: (ha-454952) Calling .GetSSHUsername
	I0404 21:55:53.230052   27181 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/ha-454952/id_rsa Username:docker}
	I0404 21:55:53.345414   27181 ssh_runner.go:195] Run: systemctl --version
	I0404 21:55:53.352521   27181 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 21:55:53.524115   27181 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 21:55:53.531412   27181 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 21:55:53.531472   27181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 21:55:53.543074   27181 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0404 21:55:53.543098   27181 start.go:494] detecting cgroup driver to use...
	I0404 21:55:53.543155   27181 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 21:55:53.567023   27181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 21:55:53.583777   27181 docker.go:217] disabling cri-docker service (if available) ...
	I0404 21:55:53.583837   27181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 21:55:53.600423   27181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 21:55:53.616063   27181 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 21:55:53.797935   27181 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 21:55:53.954446   27181 docker.go:233] disabling docker service ...
	I0404 21:55:53.954512   27181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 21:55:53.973478   27181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 21:55:53.988665   27181 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 21:55:54.142670   27181 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 21:55:54.293766   27181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 21:55:54.310919   27181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 21:55:54.332437   27181 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 21:55:54.332508   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.344595   27181 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 21:55:54.344660   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.355996   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.367319   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.378633   27181 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 21:55:54.390388   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.402769   27181 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.415193   27181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 21:55:54.426296   27181 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 21:55:54.436174   27181 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 21:55:54.446519   27181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:55:54.598483   27181 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 21:55:58.300965   27181 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.702434241s)
	I0404 21:55:58.301015   27181 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 21:55:58.301061   27181 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 21:55:58.306850   27181 start.go:562] Will wait 60s for crictl version
	I0404 21:55:58.306898   27181 ssh_runner.go:195] Run: which crictl
	I0404 21:55:58.311205   27181 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 21:55:58.353542   27181 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 21:55:58.353625   27181 ssh_runner.go:195] Run: crio --version
	I0404 21:55:58.386691   27181 ssh_runner.go:195] Run: crio --version
	I0404 21:55:58.419949   27181 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 21:55:58.421293   27181 main.go:141] libmachine: (ha-454952) Calling .GetIP
	I0404 21:55:58.424066   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:58.424556   27181 main.go:141] libmachine: (ha-454952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:86:be", ip: ""} in network mk-ha-454952: {Iface:virbr1 ExpiryTime:2024-04-04 22:44:17 +0000 UTC Type:0 Mac:52:54:00:39:86:be Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-454952 Clientid:01:52:54:00:39:86:be}
	I0404 21:55:58.424584   27181 main.go:141] libmachine: (ha-454952) DBG | domain ha-454952 has defined IP address 192.168.39.13 and MAC address 52:54:00:39:86:be in network mk-ha-454952
	I0404 21:55:58.424791   27181 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 21:55:58.429819   27181 kubeadm.go:877] updating cluster {Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.251 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 21:55:58.429968   27181 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:55:58.430009   27181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 21:55:58.476404   27181 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 21:55:58.476425   27181 crio.go:433] Images already preloaded, skipping extraction
	I0404 21:55:58.476473   27181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 21:55:58.511567   27181 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 21:55:58.511594   27181 cache_images.go:84] Images are preloaded, skipping loading
	I0404 21:55:58.511605   27181 kubeadm.go:928] updating node { 192.168.39.13 8443 v1.29.3 crio true true} ...
	I0404 21:55:58.511729   27181 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-454952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 21:55:58.511805   27181 ssh_runner.go:195] Run: crio config
	I0404 21:55:58.564003   27181 cni.go:84] Creating CNI manager for ""
	I0404 21:55:58.564025   27181 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0404 21:55:58.564035   27181 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 21:55:58.564064   27181 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.13 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-454952 NodeName:ha-454952 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 21:55:58.564230   27181 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-454952"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
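The kubeadm config above is rendered from per-node values (advertise address, node name, bind port) and later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of rendering just the InitConfiguration part with Go's text/template follows; it assumes only the values visible in the log and is not minikube's actual generator.

    // Sketch: render a kubeadm InitConfiguration from per-node parameters.
    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        // Values taken from the log: control-plane node ha-454952 at 192.168.39.13:8443.
        _ = t.Execute(os.Stdout, struct {
            NodeIP   string
            Port     int
            NodeName string
        }{"192.168.39.13", 8443, "ha-454952"})
    }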
	
	I0404 21:55:58.564264   27181 kube-vip.go:111] generating kube-vip config ...
	I0404 21:55:58.564315   27181 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0404 21:55:58.576269   27181 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0404 21:55:58.576428   27181 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
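The manifest above is a static pod: once copied to /etc/kubernetes/manifests/kube-vip.yaml the kubelet runs it directly, and its env block carries the settings that matter for HA (the 192.168.39.254 VIP, the interface, and the load-balancer port). A standard-library sketch for pulling those settings back out of such a manifest, useful as a sanity check, is shown below; all names in it are illustrative, not minikube helpers.

    // Sketch: extract the VIP address, interface and LB port from a kube-vip manifest.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/kubernetes/manifests/kube-vip.yaml") // path from the log above
        if err != nil {
            panic(err)
        }
        defer f.Close()

        want := map[string]string{"address": "", "lb_port": "", "vip_interface": ""}
        var current string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if name, ok := strings.CutPrefix(line, "- name:"); ok {
                current = strings.TrimSpace(name)
            } else if val, ok := strings.CutPrefix(line, "value:"); ok {
                if _, tracked := want[current]; tracked {
                    want[current] = strings.Trim(strings.TrimSpace(val), `"`)
                }
            }
        }
        fmt.Printf("kube-vip will announce %s on %s (lb port %s)\n",
            want["address"], want["vip_interface"], want["lb_port"])
    }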
	I0404 21:55:58.576487   27181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 21:55:58.587007   27181 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 21:55:58.587076   27181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0404 21:55:58.597399   27181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0404 21:55:58.614841   27181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 21:55:58.632756   27181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0404 21:55:58.652474   27181 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0404 21:55:58.673658   27181 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0404 21:55:58.678100   27181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 21:55:58.836919   27181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 21:55:58.854024   27181 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952 for IP: 192.168.39.13
	I0404 21:55:58.854048   27181 certs.go:194] generating shared ca certs ...
	I0404 21:55:58.854064   27181 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:55:58.854276   27181 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 21:55:58.854343   27181 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 21:55:58.854357   27181 certs.go:256] generating profile certs ...
	I0404 21:55:58.854423   27181 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/client.key
	I0404 21:55:58.854449   27181 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.b258ffc0
	I0404 21:55:58.854463   27181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.b258ffc0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.13 192.168.39.60 192.168.39.217 192.168.39.254]
	I0404 21:55:59.063351   27181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.b258ffc0 ...
	I0404 21:55:59.063382   27181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.b258ffc0: {Name:mk5433d65ebbc99dc168542c7e560d66181820c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:55:59.063542   27181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.b258ffc0 ...
	I0404 21:55:59.063554   27181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.b258ffc0: {Name:mk01ba783c7e8d0935e0f7a584b7b8848c4c01dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:55:59.063624   27181 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt.b258ffc0 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt
	I0404 21:55:59.063769   27181 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key.b258ffc0 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key
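The apiserver certificate generated above carries IP SANs for the service IPs, localhost, the three control-plane nodes and the HA VIP, so clients can reach any of those addresses over TLS. A rough stdlib sketch of issuing a certificate with that SAN list follows; the real cert is signed by minikubeCA, while this sketch self-signs purely for brevity.

    // Sketch: create a serving cert whose IP SANs match the list in the log above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.13"), net.ParseIP("192.168.39.60"),
                net.ParseIP("192.168.39.217"), net.ParseIP("192.168.39.254"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }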
	I0404 21:55:59.063896   27181 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key
	I0404 21:55:59.063911   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0404 21:55:59.063927   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0404 21:55:59.063941   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0404 21:55:59.063951   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0404 21:55:59.063969   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0404 21:55:59.063981   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0404 21:55:59.063992   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0404 21:55:59.064003   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0404 21:55:59.064041   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 21:55:59.064066   27181 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 21:55:59.064075   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 21:55:59.064099   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 21:55:59.064137   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 21:55:59.064157   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 21:55:59.064194   27181 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 21:55:59.064217   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /usr/share/ca-certificates/125542.pem
	I0404 21:55:59.064230   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:55:59.064242   27181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem -> /usr/share/ca-certificates/12554.pem
	I0404 21:55:59.064818   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 21:55:59.093429   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 21:55:59.119285   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 21:55:59.146917   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 21:55:59.173303   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0404 21:55:59.200852   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 21:55:59.227698   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 21:55:59.254649   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/ha-454952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 21:55:59.280699   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 21:55:59.307064   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 21:55:59.333591   27181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 21:55:59.359579   27181 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 21:55:59.406083   27181 ssh_runner.go:195] Run: openssl version
	I0404 21:55:59.421057   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 21:55:59.434123   27181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 21:55:59.439412   27181 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 21:55:59.439469   27181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 21:55:59.446015   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 21:55:59.456399   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 21:55:59.469371   27181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:55:59.474477   27181 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:55:59.474541   27181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 21:55:59.480714   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 21:55:59.490801   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 21:55:59.502144   27181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 21:55:59.507930   27181 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 21:55:59.507995   27181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 21:55:59.514189   27181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
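The three ln -fs commands above follow the OpenSSL subject-hash convention: each installed PEM gets a /etc/ssl/certs/<hash>.0 symlink, where the hash comes from `openssl x509 -hash -noout`, so OpenSSL can find the CA by subject. A small Go sketch of the same step is below; `linkBySubjectHash` is an illustrative name, not a minikube helper.

    // Sketch: compute a PEM's subject hash and create the <hash>.0 symlink OpenSSL expects.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mimic ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }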
	I0404 21:55:59.524145   27181 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 21:55:59.529373   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 21:55:59.535677   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 21:55:59.542836   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 21:55:59.549012   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 21:55:59.555011   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 21:55:59.561006   27181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
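Each of the `openssl x509 -checkend 86400` runs above asks one question: does the certificate expire within the next 24 hours? A stdlib equivalent (a sketch, not minikube's code) is:

    // Sketch: report whether a certificate's NotAfter falls within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }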
	I0404 21:55:59.566941   27181 kubeadm.go:391] StartCluster: {Name:ha-454952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-454952 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.251 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:55:59.567070   27181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 21:55:59.567119   27181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 21:55:59.608244   27181 cri.go:89] found id: "40c7e5205a7f2c7dca97128c4b3db2baa0cb5a2a2d71e6e0041bb8dcc5e62085"
	I0404 21:55:59.608270   27181 cri.go:89] found id: "3f1f1315a8b9166afc7caab3311dbe513f1259bca49d23c86707d7c46cd90718"
	I0404 21:55:59.608276   27181 cri.go:89] found id: "f0e949f71327f502a0430c391da79a4e330beb8aa9171c6f6cc4c6f1627b6008"
	I0404 21:55:59.608281   27181 cri.go:89] found id: "9e1a578fe0ac02dadad448b39bd45569b6296c53aa66eab5d21d43b7572cd092"
	I0404 21:55:59.608285   27181 cri.go:89] found id: "f5da670e72260df047daddb872854e23d680c6e1ba40671362700eb4dcc9b43e"
	I0404 21:55:59.608293   27181 cri.go:89] found id: "a65041cfea93095250c3fdc69a6eb0688089d798ad9c23a920244aea4d408dbf"
	I0404 21:55:59.608297   27181 cri.go:89] found id: "2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f"
	I0404 21:55:59.608301   27181 cri.go:89] found id: "b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c"
	I0404 21:55:59.608304   27181 cri.go:89] found id: "90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05"
	I0404 21:55:59.608311   27181 cri.go:89] found id: "a0c8fa7da2804867788af45dab4db03cd14b20ee017b4626f4e256792a8e568d"
	I0404 21:55:59.608315   27181 cri.go:89] found id: "c3820dd8095443cb26bbe6c6105086f7a0af9455932f2fb37243fe160746ed65"
	I0404 21:55:59.608319   27181 cri.go:89] found id: "e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048"
	I0404 21:55:59.608333   27181 cri.go:89] found id: "a94e56804eb2ebb05d499b2c2006d7b844493f16ba5287a6abade07802f422e1"
	I0404 21:55:59.608341   27181 cri.go:89] found id: "72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3"
	I0404 21:55:59.608355   27181 cri.go:89] found id: ""
	I0404 21:55:59.608415   27181 ssh_runner.go:195] Run: sudo runc list -f json
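The container IDs listed above come from `crictl ps -a --quiet` filtered by the io.kubernetes.pod.namespace=kube-system label. A minimal sketch of the same query is below; the helper name is illustrative.

    // Sketch: list all kube-system container IDs via crictl, as in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func kubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := kubeSystemContainers()
        if err != nil {
            panic(err)
        }
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }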
	
	
	==> CRI-O <==
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.128150481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712268061127836270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad1ac0f9-7d04-4bca-ad7f-c1d7201a5ad3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.128812466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c99a8ce3-26fa-4c3e-888b-a532e625dce5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.128886078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c99a8ce3-26fa-4c3e-888b-a532e625dce5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.129543310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6cc4d61d02bd1b7fb2857ec3294e5d9a2bdb3ee135480853f3cc50e864ba0e,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267835730605441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6afa481405f5045f3db6cf7ae9618dad4ebdc6752c9305a9d7e373f249d727,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267817726721259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b188c8442602f8efd7eb8e9047476741c8c5c865ede423661d758bb83b6b05c,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267808726201589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e953bf3c89fc497ba14cbf93c24765653b3b1037dd866ee0b68061b98fb218fe,PodSandboxId:01a1479528f6de61e52ce7f71e64ada2eca7c4d104340ce49ce430f677aba4d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267798366173313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e409496e2bfa6083bb1dd353fce8e7415c5543ae14bb9bccdf87e80d8ddec7,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267796105878325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a66b8b586a4fd72dbaee27006e88e5d9c3313bb6bd16f4c4106d12d9283940,PodSandboxId:ac16fed91142a3e463e8cedb0a1a1c7176c3085e18aa6f7c04aeaa29151181ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267780537407952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9be7382aec2c5718d2c3ee2100157c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:a3a568a83338d7ba5e0ad5f31df5071f6aa7c2b4eb6f48a92c98d29ac8bd266d,PodSandboxId:3bafbcd423401c441a4d165a06c80cbc3da18ddd50a1f9c2b73e6bc688e44b20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267765689534579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:bc8218a5029f13c2ed80f0fe1a747232912e939468d12ff6da8d1f9801f45f46,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712267765537650432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee2070
be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712267764855980645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9141f11f6629018a0b27dac80b27b6813a81e074
b69b7db8c3a549a51a5209,PodSandboxId:1ec2a54c59b2006f4b1df533f96f23078a144f176c5bc6defcee5c3573aa70c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764965596764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a678a5fd4129c9d5ed065e6d6cf82de15766bc944d4ba612411d966c79dea733,PodSandboxId:787cebef1eeefad70cbda0986a9cc3ea7642d152e9bd8ba2e4ad63e43caab690,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764849872069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b43e4ef90fb190c80b32153e55f8d5e4d07a57a5a7f58e8ae3270c59a5b7a4,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712267764733460583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee36899c21ef1e958722bdf5f5d9a1353202a8067eb41c81e4cdd8fe7c8129b,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712267764672483098,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb3e4500f1988eec953c2f90b45fb2e4ce58d37fcb5a8d83b033102bfb7557a,PodSandboxId:7c0af8cf9edf4ba036b98efbfc79d4e1ca2707da7799dffbb99455fdd6921113,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267764565452653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2
961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbea1fa16613e48ca7af74cdbecc207ba6536f8a1410b2c3af9fee06da42d20,PodSandboxId:11ae4dcd2ed2a2582d43ba63f187be7aece2582a9c20654daf0b6dad6bd646c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267764503471258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kube
rnetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712267279351397601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kuber
netes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101615496775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101585362292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712267099360246586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712267079628250088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1712267079596518755,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c99a8ce3-26fa-4c3e-888b-a532e625dce5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.176221127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28d73347-f73b-4bc7-a559-70df958a6251 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.176322249Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28d73347-f73b-4bc7-a559-70df958a6251 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.177579805Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=193204f0-1da7-4a29-ba2c-31ff291b021b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.178616642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712268061178587487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=193204f0-1da7-4a29-ba2c-31ff291b021b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.179520427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88428170-66c1-480b-8afa-aa4f8109b325 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.179576934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88428170-66c1-480b-8afa-aa4f8109b325 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.180077278Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6cc4d61d02bd1b7fb2857ec3294e5d9a2bdb3ee135480853f3cc50e864ba0e,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267835730605441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6afa481405f5045f3db6cf7ae9618dad4ebdc6752c9305a9d7e373f249d727,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267817726721259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b188c8442602f8efd7eb8e9047476741c8c5c865ede423661d758bb83b6b05c,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267808726201589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e953bf3c89fc497ba14cbf93c24765653b3b1037dd866ee0b68061b98fb218fe,PodSandboxId:01a1479528f6de61e52ce7f71e64ada2eca7c4d104340ce49ce430f677aba4d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267798366173313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e409496e2bfa6083bb1dd353fce8e7415c5543ae14bb9bccdf87e80d8ddec7,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267796105878325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a66b8b586a4fd72dbaee27006e88e5d9c3313bb6bd16f4c4106d12d9283940,PodSandboxId:ac16fed91142a3e463e8cedb0a1a1c7176c3085e18aa6f7c04aeaa29151181ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267780537407952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9be7382aec2c5718d2c3ee2100157c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:a3a568a83338d7ba5e0ad5f31df5071f6aa7c2b4eb6f48a92c98d29ac8bd266d,PodSandboxId:3bafbcd423401c441a4d165a06c80cbc3da18ddd50a1f9c2b73e6bc688e44b20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267765689534579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:bc8218a5029f13c2ed80f0fe1a747232912e939468d12ff6da8d1f9801f45f46,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712267765537650432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee2070
be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712267764855980645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9141f11f6629018a0b27dac80b27b6813a81e074
b69b7db8c3a549a51a5209,PodSandboxId:1ec2a54c59b2006f4b1df533f96f23078a144f176c5bc6defcee5c3573aa70c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764965596764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a678a5fd4129c9d5ed065e6d6cf82de15766bc944d4ba612411d966c79dea733,PodSandboxId:787cebef1eeefad70cbda0986a9cc3ea7642d152e9bd8ba2e4ad63e43caab690,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764849872069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b43e4ef90fb190c80b32153e55f8d5e4d07a57a5a7f58e8ae3270c59a5b7a4,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712267764733460583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee36899c21ef1e958722bdf5f5d9a1353202a8067eb41c81e4cdd8fe7c8129b,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712267764672483098,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb3e4500f1988eec953c2f90b45fb2e4ce58d37fcb5a8d83b033102bfb7557a,PodSandboxId:7c0af8cf9edf4ba036b98efbfc79d4e1ca2707da7799dffbb99455fdd6921113,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267764565452653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2
961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbea1fa16613e48ca7af74cdbecc207ba6536f8a1410b2c3af9fee06da42d20,PodSandboxId:11ae4dcd2ed2a2582d43ba63f187be7aece2582a9c20654daf0b6dad6bd646c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267764503471258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kube
rnetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712267279351397601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kuber
netes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101615496775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101585362292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712267099360246586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712267079628250088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1712267079596518755,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88428170-66c1-480b-8afa-aa4f8109b325 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.246598951Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87dcd611-75c1-439a-993d-51f90c9862ed name=/runtime.v1.RuntimeService/Version
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.246724751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87dcd611-75c1-439a-993d-51f90c9862ed name=/runtime.v1.RuntimeService/Version
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.248444840Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f55a19ed-9a5e-4a25-a642-e72c74183115 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.249359494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712268061249332714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f55a19ed-9a5e-4a25-a642-e72c74183115 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.250002045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f18a6c1d-fa30-4748-907a-72a80badd890 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.250063758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f18a6c1d-fa30-4748-907a-72a80badd890 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.250463571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6cc4d61d02bd1b7fb2857ec3294e5d9a2bdb3ee135480853f3cc50e864ba0e,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267835730605441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6afa481405f5045f3db6cf7ae9618dad4ebdc6752c9305a9d7e373f249d727,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267817726721259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b188c8442602f8efd7eb8e9047476741c8c5c865ede423661d758bb83b6b05c,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267808726201589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e953bf3c89fc497ba14cbf93c24765653b3b1037dd866ee0b68061b98fb218fe,PodSandboxId:01a1479528f6de61e52ce7f71e64ada2eca7c4d104340ce49ce430f677aba4d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267798366173313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e409496e2bfa6083bb1dd353fce8e7415c5543ae14bb9bccdf87e80d8ddec7,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267796105878325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a66b8b586a4fd72dbaee27006e88e5d9c3313bb6bd16f4c4106d12d9283940,PodSandboxId:ac16fed91142a3e463e8cedb0a1a1c7176c3085e18aa6f7c04aeaa29151181ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267780537407952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9be7382aec2c5718d2c3ee2100157c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:a3a568a83338d7ba5e0ad5f31df5071f6aa7c2b4eb6f48a92c98d29ac8bd266d,PodSandboxId:3bafbcd423401c441a4d165a06c80cbc3da18ddd50a1f9c2b73e6bc688e44b20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267765689534579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:bc8218a5029f13c2ed80f0fe1a747232912e939468d12ff6da8d1f9801f45f46,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712267765537650432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee2070
be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712267764855980645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9141f11f6629018a0b27dac80b27b6813a81e074
b69b7db8c3a549a51a5209,PodSandboxId:1ec2a54c59b2006f4b1df533f96f23078a144f176c5bc6defcee5c3573aa70c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764965596764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a678a5fd4129c9d5ed065e6d6cf82de15766bc944d4ba612411d966c79dea733,PodSandboxId:787cebef1eeefad70cbda0986a9cc3ea7642d152e9bd8ba2e4ad63e43caab690,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764849872069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b43e4ef90fb190c80b32153e55f8d5e4d07a57a5a7f58e8ae3270c59a5b7a4,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712267764733460583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee36899c21ef1e958722bdf5f5d9a1353202a8067eb41c81e4cdd8fe7c8129b,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712267764672483098,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb3e4500f1988eec953c2f90b45fb2e4ce58d37fcb5a8d83b033102bfb7557a,PodSandboxId:7c0af8cf9edf4ba036b98efbfc79d4e1ca2707da7799dffbb99455fdd6921113,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267764565452653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2
961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbea1fa16613e48ca7af74cdbecc207ba6536f8a1410b2c3af9fee06da42d20,PodSandboxId:11ae4dcd2ed2a2582d43ba63f187be7aece2582a9c20654daf0b6dad6bd646c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267764503471258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kube
rnetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712267279351397601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kuber
netes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101615496775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101585362292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712267099360246586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712267079628250088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1712267079596518755,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f18a6c1d-fa30-4748-907a-72a80badd890 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.299657263Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc2b340b-ff98-4d69-948a-9c45cd6a3773 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.299815872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc2b340b-ff98-4d69-948a-9c45cd6a3773 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.302010486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b8dca65-958d-400b-9405-a158e7f26f58 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.302568882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712268061302528890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b8dca65-958d-400b-9405-a158e7f26f58 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.303498004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fa02c15-ea9c-48bf-b87d-ad01499d65c7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.303575819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fa02c15-ea9c-48bf-b87d-ad01499d65c7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:01:01 ha-454952 crio[3915]: time="2024-04-04 22:01:01.304166696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6cc4d61d02bd1b7fb2857ec3294e5d9a2bdb3ee135480853f3cc50e864ba0e,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712267835730605441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6afa481405f5045f3db6cf7ae9618dad4ebdc6752c9305a9d7e373f249d727,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712267817726721259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b188c8442602f8efd7eb8e9047476741c8c5c865ede423661d758bb83b6b05c,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712267808726201589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e953bf3c89fc497ba14cbf93c24765653b3b1037dd866ee0b68061b98fb218fe,PodSandboxId:01a1479528f6de61e52ce7f71e64ada2eca7c4d104340ce49ce430f677aba4d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712267798366173313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e409496e2bfa6083bb1dd353fce8e7415c5543ae14bb9bccdf87e80d8ddec7,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712267796105878325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a66b8b586a4fd72dbaee27006e88e5d9c3313bb6bd16f4c4106d12d9283940,PodSandboxId:ac16fed91142a3e463e8cedb0a1a1c7176c3085e18aa6f7c04aeaa29151181ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712267780537407952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9be7382aec2c5718d2c3ee2100157c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:a3a568a83338d7ba5e0ad5f31df5071f6aa7c2b4eb6f48a92c98d29ac8bd266d,PodSandboxId:3bafbcd423401c441a4d165a06c80cbc3da18ddd50a1f9c2b73e6bc688e44b20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712267765689534579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:bc8218a5029f13c2ed80f0fe1a747232912e939468d12ff6da8d1f9801f45f46,PodSandboxId:be9d1c66a7fb28d26255e7f6d050c7669ce19c3899d8c7c980152682e3579987,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712267765537650432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8531ddb-fa9d-4efe-91cc-072e75a5897d,},Annotations:map[string]string{io.kubernetes.container.hash: 7ab74291,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee2070
be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419,PodSandboxId:af7f752f0deadce6c7aacd988726430e5da7845252eb8ed9d6700eb4ed71d0ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712267764855980645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v8wv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44250298-dce4-4e12-88c2-e347b4a63711,},Annotations:map[string]string{io.kubernetes.container.hash: e6b5b570,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9141f11f6629018a0b27dac80b27b6813a81e074
b69b7db8c3a549a51a5209,PodSandboxId:1ec2a54c59b2006f4b1df533f96f23078a144f176c5bc6defcee5c3573aa70c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764965596764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a678a5fd4129c9d5ed065e6d6cf82de15766bc944d4ba612411d966c79dea733,PodSandboxId:787cebef1eeefad70cbda0986a9cc3ea7642d152e9bd8ba2e4ad63e43caab690,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712267764849872069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b43e4ef90fb190c80b32153e55f8d5e4d07a57a5a7f58e8ae3270c59a5b7a4,PodSandboxId:3c2cbdd1490fdcf9c72d76cef8242cbaa9a85b77a409f21ce39f241b75d4f911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712267764733460583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-454952,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: ff0fb93d9d927cd07ff8b57eb1cfc5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee36899c21ef1e958722bdf5f5d9a1353202a8067eb41c81e4cdd8fe7c8129b,PodSandboxId:8608a38b4396763bcc6a5420d72c8fed59879a6ec66859544003bcb6b11b11da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712267764672483098,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dc0414ca08a9713fe775ae3a7f4f8fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed56074,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb3e4500f1988eec953c2f90b45fb2e4ce58d37fcb5a8d83b033102bfb7557a,PodSandboxId:7c0af8cf9edf4ba036b98efbfc79d4e1ca2707da7799dffbb99455fdd6921113,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712267764565452653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2
961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbea1fa16613e48ca7af74cdbecc207ba6536f8a1410b2c3af9fee06da42d20,PodSandboxId:11ae4dcd2ed2a2582d43ba63f187be7aece2582a9c20654daf0b6dad6bd646c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712267764503471258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kube
rnetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85478f2f51e8940eda125b940661b2d41922995b38a8b994597da429a8e1761c,PodSandboxId:2c8e166c4509c3ab63f1656c060ddc371d7df608ede11736dc1c97b6b7735a02,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712267279351397601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-q56fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53780518-8100-4f1a-993c-fb9c76dfecb1,},Annotations:map[string]string{io.kuber
netes.container.hash: 5afdd25b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f,PodSandboxId:b1934889b30c3d076d1c5adf51f5dc0fda9a76d4af89bfa827ced69657250f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101615496775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qsz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af3d10e-47b7-439c-80e3-8ee328d87f16,},Annotations:map[string]string{io.kubernetes.container.hash: e3c5e107,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c,PodSandboxId:0b786dbf91033b4a759bcbfbf788250cfb59102a2a08d354f4813b7a7aac6505,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712267101585362292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-hsdfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0384e31-ec4b-4b09-b387-5bce7a36b688,},Annotations:map[string]string{io.kubernetes.container.hash: 549fa4a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05,PodSandboxId:2748de75b7d2d2c4dc7badac77b11e2ac4212a65164ce75513893475ca09f389,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712267099360246586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60759cb6-a394-4e3e-a19e-f9b7c92a19db,},Annotations:map[string]string{io.kubernetes.container.hash: 235e7201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048,PodSandboxId:9f1d5c3d0af967ded761e3385a76c7fa74da5831c5d09a97fde5cbb8a628afc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712267079628250088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9e1a8a12476c92b69f2961660a6fad,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3,PodSandboxId:92d02e4d213b303750f6e7ff559242dcb3f484ab75b198a00e098079b4fb83c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1712267079596518755,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-454952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60f573734866339a188d0d844e2d7d82,},Annotations:map[string]string{io.kubernetes.container.hash: d483ae82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fa02c15-ea9c-48bf-b87d-ad01499d65c7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4a6cc4d61d02b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               4                   af7f752f0dead       kindnet-v8wv6
	5c6afa481405f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   be9d1c66a7fb2       storage-provisioner
	9b188c8442602       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      4 minutes ago       Running             kube-controller-manager   2                   3c2cbdd1490fd       kube-controller-manager-ha-454952
	e953bf3c89fc4       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   01a1479528f6d       busybox-7fdf7869d9-q56fw
	b0e409496e2bf       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      4 minutes ago       Running             kube-apiserver            3                   8608a38b43967       kube-apiserver-ha-454952
	71a66b8b586a4       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago       Running             kube-vip                  0                   ac16fed91142a       kube-vip-ha-454952
	a3a568a83338d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      4 minutes ago       Running             kube-proxy                1                   3bafbcd423401       kube-proxy-gjvm9
	bc8218a5029f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   be9d1c66a7fb2       storage-provisioner
	2a9141f11f662       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   1ec2a54c59b20       coredns-76f75df574-9qsz7
	eee2070be0d0e       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Exited              kindnet-cni               3                   af7f752f0dead       kindnet-v8wv6
	a678a5fd4129c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   787cebef1eeef       coredns-76f75df574-hsdfw
	b9b43e4ef90fb       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      4 minutes ago       Exited              kube-controller-manager   1                   3c2cbdd1490fd       kube-controller-manager-ha-454952
	9ee36899c21ef       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      4 minutes ago       Exited              kube-apiserver            2                   8608a38b43967       kube-apiserver-ha-454952
	aeb3e4500f198       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      4 minutes ago       Running             kube-scheduler            1                   7c0af8cf9edf4       kube-scheduler-ha-454952
	ebbea1fa16613       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   11ae4dcd2ed2a       etcd-ha-454952
	85478f2f51e89       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   2c8e166c4509c       busybox-7fdf7869d9-q56fw
	2f6afcac0a6b1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   b1934889b30c3       coredns-76f75df574-9qsz7
	b3fc8d8ef023d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   0b786dbf91033       coredns-76f75df574-hsdfw
	90c39a2687464       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      16 minutes ago      Exited              kube-proxy                0                   2748de75b7d2d       kube-proxy-gjvm9
	e9faec0816d4c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      16 minutes ago      Exited              kube-scheduler            0                   9f1d5c3d0af96       kube-scheduler-ha-454952
	72549bccc4ca2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   92d02e4d213b3       etcd-ha-454952
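The ListContainers dump and the container status table above both come from the CRI-O runtime service on the node. As a rough illustration only, the Go sketch below pulls a comparable listing over the CRI socket with the v1 CRI API bindings; it assumes CRI-O's default socket path, trims error handling, and is a sketch rather than the harness's own collection code.

// Sketch: list containers via the CRI RuntimeService, the same
// ListContainers call whose response appears in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// gRPC understands the unix:// scheme, so no custom dialer is needed.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Name, state and attempt mirror the columns in the table above.
		fmt.Printf("%s\t%s\t%s\tattempt=%d\n",
			c.Id[:13], c.Metadata.Name, c.State.String(), c.Metadata.Attempt)
	}
}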
	
	
	==> coredns [2a9141f11f6629018a0b27dac80b27b6813a81e074b69b7db8c3a549a51a5209] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52852->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[236522688]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Apr-2024 21:56:16.680) (total time: 10162ms):
	Trace[236522688]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52852->10.96.0.1:443: read: connection reset by peer 10162ms (21:56:26.843)
	Trace[236522688]: [10.162687737s] [10.162687737s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52852->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56616->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56616->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52848->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1143492343]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Apr-2024 21:56:16.658) (total time: 10286ms):
	Trace[1143492343]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52848->10.96.0.1:443: read: connection reset by peer 10286ms (21:56:26.945)
	Trace[1143492343]: [10.286644886s] [10.286644886s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52848->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [2f6afcac0a6b1b11aed92685dc75ff4ab9aa463b33a234c4e38dcaff4c67d85f] <==
	[INFO] 10.244.2.2:49348 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146448s
	[INFO] 10.244.2.2:48867 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138618s
	[INFO] 10.244.0.4:54618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070304s
	[INFO] 10.244.1.2:58936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144716s
	[INFO] 10.244.1.2:43170 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002050369s
	[INFO] 10.244.1.2:59811 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149418s
	[INFO] 10.244.1.2:58173 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001389488s
	[INFO] 10.244.1.2:50742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078385s
	[INFO] 10.244.1.2:46973 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077499s
	[INFO] 10.244.2.2:43785 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153069s
	[INFO] 10.244.2.2:37406 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074939s
	[INFO] 10.244.0.4:41091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141133s
	[INFO] 10.244.0.4:44476 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202801s
	[INFO] 10.244.0.4:45234 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104556s
	[INFO] 10.244.1.2:39647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182075s
	[INFO] 10.244.1.2:50588 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151414s
	[INFO] 10.244.1.2:41606 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195991s
	[INFO] 10.244.2.2:53483 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000232191s
	[INFO] 10.244.2.2:60437 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000132599s
	[INFO] 10.244.1.2:51965 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166052s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a678a5fd4129c9d5ed065e6d6cf82de15766bc944d4ba612411d966c79dea733] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[457864532]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Apr-2024 21:56:10.069) (total time: 10001ms):
	Trace[457864532]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:56:20.070)
	Trace[457864532]: [10.001189318s] [10.001189318s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35720->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2020089354]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Apr-2024 21:56:16.706) (total time: 10238ms):
	Trace[2020089354]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35720->10.96.0.1:443: read: connection reset by peer 10238ms (21:56:26.945)
	Trace[2020089354]: [10.238860965s] [10.238860965s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35720->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b3fc8d8ef023dc68cb719490a898c0bd77afc3ff3958e7132c1f3a6349b4f49c] <==
	[INFO] 10.244.0.4:51293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085331s
	[INFO] 10.244.0.4:55321 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087493s
	[INFO] 10.244.0.4:59685 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001579648s
	[INFO] 10.244.0.4:33041 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157393s
	[INFO] 10.244.0.4:58677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109886s
	[INFO] 10.244.1.2:59156 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010739s
	[INFO] 10.244.1.2:53747 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144738s
	[INFO] 10.244.2.2:48166 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144032s
	[INFO] 10.244.2.2:36301 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000211342s
	[INFO] 10.244.0.4:34383 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072486s
	[INFO] 10.244.1.2:47623 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000275299s
	[INFO] 10.244.2.2:36199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000346157s
	[INFO] 10.244.2.2:51401 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193332s
	[INFO] 10.244.0.4:48691 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082711s
	[INFO] 10.244.0.4:37702 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000047018s
	[INFO] 10.244.0.4:59456 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000854s
	[INFO] 10.244.0.4:56014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070317s
	[INFO] 10.244.1.2:47145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204326s
	[INFO] 10.244.1.2:36898 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127022s
	[INFO] 10.244.1.2:42608 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109931s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-454952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T21_44_47_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:44:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 22:00:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:56:43 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:56:43 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:56:43 +0000   Thu, 04 Apr 2024 21:44:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:56:43 +0000   Thu, 04 Apr 2024 21:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    ha-454952
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bcaf06686d84ca785ca1e79fc3ee92b
	  System UUID:                9bcaf066-86d8-4ca7-85ca-1e79fc3ee92b
	  Boot ID:                    00b02ff9-8c43-4004-ab1c-4fcde5b8a674
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-q56fw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-76f75df574-9qsz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-76f75df574-hsdfw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-454952                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-v8wv6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-454952             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-454952    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-gjvm9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-454952             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-454952                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m12s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-454952 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-454952 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-454952 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-454952 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-454952 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-454952 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                    node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-454952 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Warning  ContainerGCFailed        5m15s (x2 over 6m15s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m7s                   node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-454952 event: Registered Node ha-454952 in Controller
	
	
	Name:               ha-454952-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T21_46_24_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:46:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 22:00:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 21:59:40 +0000   Thu, 04 Apr 2024 21:59:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 21:59:40 +0000   Thu, 04 Apr 2024 21:59:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 21:59:40 +0000   Thu, 04 Apr 2024 21:59:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 21:59:40 +0000   Thu, 04 Apr 2024 21:59:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-454952-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f458ea60975d458aa9cb6e203993b49a
	  System UUID:                f458ea60-975d-458a-a9cb-6e203993b49a
	  Boot ID:                    c363bc48-d9ed-42ea-b93d-193390f6e28a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rshl2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-454952-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-7c9dv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-454952-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-454952-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6nkxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-454952-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-454952-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-454952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-454952-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-454952-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-454952-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m39s (x8 over 4m39s)  kubelet          Node ha-454952-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m39s (x8 over 4m39s)  kubelet          Node ha-454952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m39s (x7 over 4m39s)  kubelet          Node ha-454952-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-454952-m02 event: Registered Node ha-454952-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-454952-m02 status is now: NodeNotReady
	
	
	Name:               ha-454952-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-454952-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=ha-454952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T21_48_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 21:48:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-454952-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 21:58:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 04 Apr 2024 21:58:12 +0000   Thu, 04 Apr 2024 21:59:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 04 Apr 2024 21:58:12 +0000   Thu, 04 Apr 2024 21:59:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 04 Apr 2024 21:58:12 +0000   Thu, 04 Apr 2024 21:59:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 04 Apr 2024 21:58:12 +0000   Thu, 04 Apr 2024 21:59:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-454952-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eaf323303c74873975b4953c592319b
	  System UUID:                0eaf3233-03c7-4873-975b-4953c592319b
	  Boot ID:                    cf21f544-0ad6-43b6-a6f9-d781e0417766
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-f76nj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-6mmgj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-5j62j            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-454952-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-454952-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-454952-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-454952-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m7s                   node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-454952-m04 event: Registered Node ha-454952-m04 in Controller
	  Warning  Rebooted                 2m49s                  kubelet          Node ha-454952-m04 has been rebooted, boot id: cf21f544-0ad6-43b6-a6f9-d781e0417766
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeReady                2m49s                  kubelet          Node ha-454952-m04 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m49s)  kubelet          Node ha-454952-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m49s)  kubelet          Node ha-454952-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m49s)  kubelet          Node ha-454952-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             107s (x2 over 3m27s)   node-controller  Node ha-454952-m04 status is now: NodeNotReady
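The Conditions and Events tables above are standard kubectl describe output for the three ha-454952 nodes. As a rough illustration, the client-go sketch below prints the same per-node condition fields; it assumes a reachable kubeconfig at the default path, and the file path and output format are illustrative, not taken from the test harness.

// Sketch: print node conditions (Type/Status/Reason) with client-go.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// Matches the Conditions rows in the describe output above.
			fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}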
	
	
	==> dmesg <==
	[  +0.060191] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.177107] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.151661] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.307912] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.603000] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.064613] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.478091] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +0.520027] kauditd_printk_skb: 48 callbacks suppressed
	[  +7.408849] systemd-fstab-generator[1387]: Ignoring "noauto" option for root device
	[  +0.092051] kauditd_printk_skb: 49 callbacks suppressed
	[ +12.761594] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 4 21:46] kauditd_printk_skb: 76 callbacks suppressed
	[Apr 4 21:52] kauditd_printk_skb: 1 callbacks suppressed
	[Apr 4 21:55] systemd-fstab-generator[3834]: Ignoring "noauto" option for root device
	[  +0.166701] systemd-fstab-generator[3846]: Ignoring "noauto" option for root device
	[  +0.186909] systemd-fstab-generator[3860]: Ignoring "noauto" option for root device
	[  +0.154277] systemd-fstab-generator[3872]: Ignoring "noauto" option for root device
	[  +0.306196] systemd-fstab-generator[3900]: Ignoring "noauto" option for root device
	[  +4.229874] systemd-fstab-generator[4006]: Ignoring "noauto" option for root device
	[  +0.091286] kauditd_printk_skb: 100 callbacks suppressed
	[Apr 4 21:56] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.636937] kauditd_printk_skb: 98 callbacks suppressed
	[ +10.074127] kauditd_printk_skb: 1 callbacks suppressed
	[ +20.014370] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.552840] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [72549bccc4ca208cc32ef4097c6e445e303a84fb0b57a20dd7f1ebf7e7a9b1e3] <==
	{"level":"warn","ts":"2024-04-04T21:54:22.156004Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T21:54:21.448653Z","time spent":"707.343465ms","remote":"127.0.0.1:43708","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":0,"response size":0,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" limit:10000 "}
	2024/04/04 21:54:22 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-04T21:54:22.155543Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T21:54:21.453013Z","time spent":"702.51763ms","remote":"127.0.0.1:43684","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	2024/04/04 21:54:22 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-04T21:54:22.155458Z","caller":"traceutil/trace.go:171","msg":"trace[819982146] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; }","duration":"386.57806ms","start":"2024-04-04T21:54:21.768875Z","end":"2024-04-04T21:54:22.155453Z","steps":["trace[819982146] 'agreement among raft nodes before linearized reading'  (duration: 368.683042ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T21:54:22.156437Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T21:54:21.768842Z","time spent":"387.586118ms","remote":"127.0.0.1:43616","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" limit:500 "}
	2024/04/04 21:54:22 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-04T21:54:22.281431Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1d3fba3e6c6ecbcd","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-04T21:54:22.281662Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.281788Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.281814Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.281866Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.281918Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.281984Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.282019Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"34931ad6304bb19a"}
	{"level":"info","ts":"2024-04-04T21:54:22.282027Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.282037Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.282056Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.28215Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.282233Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.282295Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.282333Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:54:22.285495Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-04-04T21:54:22.285778Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-04-04T21:54:22.285821Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-454952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"]}
	
	
	==> etcd [ebbea1fa16613e48ca7af74cdbecc207ba6536f8a1410b2c3af9fee06da42d20] <==
	{"level":"info","ts":"2024-04-04T21:57:27.424253Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:57:27.453604Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"warn","ts":"2024-04-04T21:57:27.453727Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.217:45160","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-04-04T21:57:27.4581Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"warn","ts":"2024-04-04T21:57:27.460787Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.217:45190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-04T21:57:27.460995Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.217:45170","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-04-04T21:57:44.059947Z","caller":"traceutil/trace.go:171","msg":"trace[314385097] transaction","detail":"{read_only:false; response_revision:2442; number_of_response:1; }","duration":"116.761136ms","start":"2024-04-04T21:57:43.943139Z","end":"2024-04-04T21:57:44.0599Z","steps":["trace[314385097] 'process raft request'  (duration: 116.425736ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T21:58:27.216386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd switched to configuration voters=(2107607927902620621 3788401218784309658)"}
	{"level":"info","ts":"2024-04-04T21:58:27.219333Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1e01947a35a5ac2c","local-member-id":"1d3fba3e6c6ecbcd","removed-remote-peer-id":"409f8332ca29f5e9","removed-remote-peer-urls":["https://192.168.39.217:2380"]}
	{"level":"info","ts":"2024-04-04T21:58:27.219465Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"warn","ts":"2024-04-04T21:58:27.220414Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:58:27.220472Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"warn","ts":"2024-04-04T21:58:27.220776Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:58:27.220815Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"warn","ts":"2024-04-04T21:58:27.220856Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"1d3fba3e6c6ecbcd","removed-member-id":"409f8332ca29f5e9"}
	{"level":"warn","ts":"2024-04-04T21:58:27.220975Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-04-04T21:58:27.221214Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"warn","ts":"2024-04-04T21:58:27.221432Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9","error":"context canceled"}
	{"level":"warn","ts":"2024-04-04T21:58:27.221487Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"409f8332ca29f5e9","error":"failed to read 409f8332ca29f5e9 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-04T21:58:27.221516Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"warn","ts":"2024-04-04T21:58:27.221768Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9","error":"context canceled"}
	{"level":"info","ts":"2024-04-04T21:58:27.221808Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:58:27.221824Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"409f8332ca29f5e9"}
	{"level":"info","ts":"2024-04-04T21:58:27.221845Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"1d3fba3e6c6ecbcd","removed-remote-peer-id":"409f8332ca29f5e9"}
	{"level":"warn","ts":"2024-04-04T21:58:27.246017Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"1d3fba3e6c6ecbcd","remote-peer-id-stream-handler":"1d3fba3e6c6ecbcd","remote-peer-id-from":"409f8332ca29f5e9"}
	
	
	==> kernel <==
	 22:01:02 up 16 min,  0 users,  load average: 0.50, 0.45, 0.37
	Linux ha-454952 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4a6cc4d61d02bd1b7fb2857ec3294e5d9a2bdb3ee135480853f3cc50e864ba0e] <==
	I0404 22:00:17.058565       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 22:00:27.076181       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 22:00:27.076350       1 main.go:227] handling current node
	I0404 22:00:27.076399       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 22:00:27.076423       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 22:00:27.076603       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 22:00:27.076640       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 22:00:37.093057       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 22:00:37.093111       1 main.go:227] handling current node
	I0404 22:00:37.093128       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 22:00:37.093134       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 22:00:37.093464       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 22:00:37.093504       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 22:00:47.104296       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 22:00:47.104393       1 main.go:227] handling current node
	I0404 22:00:47.104422       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 22:00:47.104441       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 22:00:47.104867       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 22:00:47.105040       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	I0404 22:00:57.112996       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I0404 22:00:57.113166       1 main.go:227] handling current node
	I0404 22:00:57.113203       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0404 22:00:57.113313       1 main.go:250] Node ha-454952-m02 has CIDR [10.244.1.0/24] 
	I0404 22:00:57.113487       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0404 22:00:57.113542       1 main.go:250] Node ha-454952-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [eee2070be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419] <==
	I0404 21:56:05.482543       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0404 21:56:05.482791       1 main.go:107] hostIP = 192.168.39.13
	podIP = 192.168.39.13
	I0404 21:56:05.483026       1 main.go:116] setting mtu 1500 for CNI 
	I0404 21:56:05.483075       1 main.go:146] kindnetd IP family: "ipv4"
	I0404 21:56:05.483118       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0404 21:56:08.514431       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0404 21:56:11.585226       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0404 21:56:22.591301       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0404 21:56:26.945200       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.117:36178->10.96.0.1:443: read: connection reset by peer
	I0404 21:56:29.946371       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [9ee36899c21ef1e958722bdf5f5d9a1353202a8067eb41c81e4cdd8fe7c8129b] <==
	I0404 21:56:05.365776       1 options.go:222] external host was not specified, using 192.168.39.13
	I0404 21:56:05.370872       1 server.go:148] Version: v1.29.3
	I0404 21:56:05.370929       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 21:56:05.810788       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0404 21:56:05.832748       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0404 21:56:05.833751       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0404 21:56:05.834067       1 instance.go:297] Using reconciler: lease
	W0404 21:56:25.800806       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0404 21:56:25.800806       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0404 21:56:25.835801       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b0e409496e2bfa6083bb1dd353fce8e7415c5543ae14bb9bccdf87e80d8ddec7] <==
	I0404 21:56:38.027151       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0404 21:56:38.027348       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0404 21:56:38.046462       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0404 21:56:38.048774       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0404 21:56:38.109607       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0404 21:56:38.120578       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0404 21:56:38.121061       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0404 21:56:38.123020       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0404 21:56:38.123112       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0404 21:56:38.123891       1 aggregator.go:165] initial CRD sync complete...
	I0404 21:56:38.123931       1 autoregister_controller.go:141] Starting autoregister controller
	I0404 21:56:38.123938       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0404 21:56:38.123943       1 cache.go:39] Caches are synced for autoregister controller
	I0404 21:56:38.124567       1 shared_informer.go:318] Caches are synced for configmaps
	I0404 21:56:38.128277       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0404 21:56:38.128342       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0404 21:56:38.128366       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	W0404 21:56:38.206521       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	I0404 21:56:38.208294       1 controller.go:624] quota admission added evaluator for: endpoints
	I0404 21:56:38.221637       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0404 21:56:38.227999       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0404 21:56:39.032221       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0404 21:56:40.262359       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.13 192.168.39.217]
	W0404 21:56:50.252496       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.13 192.168.39.60]
	W0404 21:58:40.261418       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.13 192.168.39.60]
	
	
	==> kube-controller-manager [9b188c8442602f8efd7eb8e9047476741c8c5c865ede423661d758bb83b6b05c] <==
	I0404 21:59:14.564131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.415µs"
	I0404 21:59:14.649290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="16.462135ms"
	I0404 21:59:14.650635       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="72.226µs"
	E0404 21:59:20.820883       1 gc_controller.go:153] "Failed to get node" err="node \"ha-454952-m03\" not found" node="ha-454952-m03"
	E0404 21:59:20.821035       1 gc_controller.go:153] "Failed to get node" err="node \"ha-454952-m03\" not found" node="ha-454952-m03"
	E0404 21:59:20.821064       1 gc_controller.go:153] "Failed to get node" err="node \"ha-454952-m03\" not found" node="ha-454952-m03"
	E0404 21:59:20.821089       1 gc_controller.go:153] "Failed to get node" err="node \"ha-454952-m03\" not found" node="ha-454952-m03"
	E0404 21:59:20.821112       1 gc_controller.go:153] "Failed to get node" err="node \"ha-454952-m03\" not found" node="ha-454952-m03"
	I0404 21:59:20.838122       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-scheduler-ha-454952-m03"
	I0404 21:59:20.872961       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-scheduler-ha-454952-m03"
	I0404 21:59:20.873272       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-vip-ha-454952-m03"
	I0404 21:59:20.908389       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-vip-ha-454952-m03"
	I0404 21:59:20.908435       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-apiserver-ha-454952-m03"
	I0404 21:59:20.944586       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-apiserver-ha-454952-m03"
	I0404 21:59:20.944787       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-fl4jh"
	I0404 21:59:20.981000       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-fl4jh"
	I0404 21:59:20.981404       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-controller-manager-ha-454952-m03"
	I0404 21:59:21.019741       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-controller-manager-ha-454952-m03"
	I0404 21:59:21.019928       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-7v9fp"
	I0404 21:59:21.057614       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-7v9fp"
	I0404 21:59:21.057897       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/etcd-ha-454952-m03"
	I0404 21:59:21.089075       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/etcd-ha-454952-m03"
	I0404 21:59:32.596274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.278356ms"
	I0404 21:59:32.596372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="38.683µs"
	I0404 21:59:40.842219       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-rshl2" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-rshl2"
	
	
	==> kube-controller-manager [b9b43e4ef90fb190c80b32153e55f8d5e4d07a57a5a7f58e8ae3270c59a5b7a4] <==
	I0404 21:56:06.415258       1 serving.go:380] Generated self-signed cert in-memory
	I0404 21:56:06.745295       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0404 21:56:06.745394       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 21:56:06.747865       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0404 21:56:06.748131       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0404 21:56:06.748279       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0404 21:56:06.749115       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0404 21:56:26.843820       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.13:8443/healthz\": dial tcp 192.168.39.13:8443: connect: connection refused"
	
	
	==> kube-proxy [90c39a2687464a984654b859b10c52f7e4ffa3d4ae3974de2b81bc30ab98fb05] <==
	E0404 21:53:03.233467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:03.233607       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:03.233630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:03.233631       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:03.233899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:10.081256       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:10.081355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:10.081442       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:10.081464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:10.081561       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:10.081663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:19.170648       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:19.170925       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:19.171475       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:19.171535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:19.171967       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:19.172100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:37.601976       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:37.602227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-454952&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:40.673310       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:40.674356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:53:40.674289       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:53:40.674433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0404 21:54:20.609491       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0404 21:54:20.609597       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [a3a568a83338d7ba5e0ad5f31df5071f6aa7c2b4eb6f48a92c98d29ac8bd266d] <==
	I0404 21:56:06.692184       1 server_others.go:72] "Using iptables proxy"
	E0404 21:56:08.129489       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-454952\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0404 21:56:11.201400       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-454952\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0404 21:56:14.273934       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-454952\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0404 21:56:20.417563       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-454952\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0404 21:56:32.706530       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-454952\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0404 21:56:48.887588       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.13"]
	I0404 21:56:48.970991       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 21:56:48.971133       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 21:56:48.971490       1 server_others.go:168] "Using iptables Proxier"
	I0404 21:56:48.976518       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 21:56:48.976922       1 server.go:865] "Version info" version="v1.29.3"
	I0404 21:56:48.977000       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 21:56:48.980202       1 config.go:188] "Starting service config controller"
	I0404 21:56:48.980543       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 21:56:48.980646       1 config.go:97] "Starting endpoint slice config controller"
	I0404 21:56:48.980787       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 21:56:48.982090       1 config.go:315] "Starting node config controller"
	I0404 21:56:48.986764       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 21:56:49.081505       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0404 21:56:49.081601       1 shared_informer.go:318] Caches are synced for service config
	I0404 21:56:49.086845       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [aeb3e4500f1988eec953c2f90b45fb2e4ce58d37fcb5a8d83b033102bfb7557a] <==
	W0404 21:56:35.248404       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.13:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:35.248525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.13:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:35.406035       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://192.168.39.13:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:35.406155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.13:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:35.509436       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.13:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:35.509548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.13:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:35.662102       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://192.168.39.13:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:35.662233       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.13:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:35.989472       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.13:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	E0404 21:56:35.989531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.13:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	W0404 21:56:38.064090       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0404 21:56:38.065143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0404 21:56:38.065279       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0404 21:56:38.066475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0404 21:56:38.066820       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0404 21:56:38.067047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0404 21:56:42.351372       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0404 21:58:23.903276       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-l8xx8\": pod busybox-7fdf7869d9-l8xx8 is already assigned to node \"ha-454952-m04\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-l8xx8" node="ha-454952-m04"
	E0404 21:58:23.903807       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 218e1b9b-0930-4069-9679-b0904cdd1295(default/busybox-7fdf7869d9-l8xx8) wasn't assumed so cannot be forgotten"
	E0404 21:58:23.904086       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-l8xx8\": pod busybox-7fdf7869d9-l8xx8 is already assigned to node \"ha-454952-m04\"" pod="default/busybox-7fdf7869d9-l8xx8"
	I0404 21:58:23.904467       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-l8xx8" node="ha-454952-m04"
	E0404 21:58:25.392152       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-f76nj\": pod busybox-7fdf7869d9-f76nj is already assigned to node \"ha-454952-m04\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-f76nj" node="ha-454952-m04"
	E0404 21:58:25.392554       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod f1879ea4-d273-4900-af0f-2f8ca528a842(default/busybox-7fdf7869d9-f76nj) wasn't assumed so cannot be forgotten"
	E0404 21:58:25.392650       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-f76nj\": pod busybox-7fdf7869d9-f76nj is already assigned to node \"ha-454952-m04\"" pod="default/busybox-7fdf7869d9-f76nj"
	I0404 21:58:25.392742       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-f76nj" node="ha-454952-m04"
	
	
	==> kube-scheduler [e9faec0816d4c0f45d3b111057a6c5dbfcc28b5ab20bd4a8162ff6e686181048] <==
	E0404 21:54:18.397350       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 21:54:18.493612       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0404 21:54:18.493811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0404 21:54:18.757833       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0404 21:54:18.757979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0404 21:54:18.798525       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0404 21:54:18.798625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0404 21:54:18.992861       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0404 21:54:18.992958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0404 21:54:19.084774       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0404 21:54:19.084902       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0404 21:54:19.291443       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0404 21:54:19.291553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0404 21:54:19.455413       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0404 21:54:19.455444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0404 21:54:20.128871       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0404 21:54:20.128992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0404 21:54:20.146928       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0404 21:54:20.147044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0404 21:54:20.493305       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0404 21:54:20.493411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0404 21:54:22.113581       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0404 21:54:22.118600       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0404 21:54:22.119245       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0404 21:54:22.119560       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 04 21:57:00 ha-454952 kubelet[1393]: E0404 21:57:00.699949    1393 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-v8wv6_kube-system(44250298-dce4-4e12-88c2-e347b4a63711)\"" pod="kube-system/kindnet-v8wv6" podUID="44250298-dce4-4e12-88c2-e347b4a63711"
	Apr 04 21:57:07 ha-454952 kubelet[1393]: I0404 21:57:07.761891    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-7fdf7869d9-q56fw" podStartSLOduration=550.52905852 podStartE2EDuration="9m12.761790564s" podCreationTimestamp="2024-04-04 21:47:55 +0000 UTC" firstStartedPulling="2024-04-04 21:47:57.100656675 +0000 UTC m=+190.614780474" lastFinishedPulling="2024-04-04 21:47:59.333388732 +0000 UTC m=+192.847512518" observedRunningTime="2024-04-04 21:47:59.62197176 +0000 UTC m=+193.136095571" watchObservedRunningTime="2024-04-04 21:57:07.761790564 +0000 UTC m=+741.275914362"
	Apr 04 21:57:15 ha-454952 kubelet[1393]: I0404 21:57:15.699511    1393 scope.go:117] "RemoveContainer" containerID="eee2070be0d0e32d5b9dc3cb7eefe1b073f9c9c4b3c42b4252f403ad3e059419"
	Apr 04 21:57:27 ha-454952 kubelet[1393]: I0404 21:57:27.700238    1393 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-454952" podUID="87898a7a-a2df-46cf-8b58-f6e5ca4e5f7b"
	Apr 04 21:57:27 ha-454952 kubelet[1393]: I0404 21:57:27.723150    1393 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-454952"
	Apr 04 21:57:46 ha-454952 kubelet[1393]: E0404 21:57:46.749567    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:57:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:57:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:57:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:57:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 21:58:46 ha-454952 kubelet[1393]: E0404 21:58:46.746920    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:58:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:58:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:58:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:58:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 21:59:46 ha-454952 kubelet[1393]: E0404 21:59:46.746123    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 21:59:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 21:59:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 21:59:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 21:59:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 22:00:46 ha-454952 kubelet[1393]: E0404 22:00:46.746383    1393 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 22:00:46 ha-454952 kubelet[1393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 22:00:46 ha-454952 kubelet[1393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 22:00:46 ha-454952 kubelet[1393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 22:00:46 ha-454952 kubelet[1393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:01:00.780996   29110 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/16143-5297/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
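Note on the "bufio.Scanner: token too long" error in the stderr block above: that is Go's bufio.Scanner refusing a line longer than its default 64 KiB (65536-byte) token limit while logs.go reads lastStart.txt, so only the "Last Start" excerpt of the post-mortem is affected, not the test outcome. A hypothetical shell check (not part of the test suite; the path is copied from the error line) to confirm an over-long line would be:

  # Count lines longer than bufio.Scanner's default 64 KiB token limit
  awk 'length > 65536 { over++ } END { printf "lines over 64 KiB: %d\n", over+0 }' \
    /home/jenkins/minikube-integration/16143-5297/.minikube/logs/lastStart.txt
  # Print the length of the longest line
  wc -L /home/jenkins/minikube-integration/16143-5297/.minikube/logs/lastStart.txt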
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-454952 -n ha-454952
helpers_test.go:261: (dbg) Run:  kubectl --context ha-454952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.17s)
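The kubelet entries in the log above repeat "Could not set up iptables canary ... can't initialize ip6tables table `nat'" once a minute. That message indicates the guest kernel has no ip6tables NAT table available (the ip6table_nat module is missing or not loaded); it is recurring background noise and most likely unrelated to the stop timeout that failed this test. A hedged way to confirm it from inside the node (profile name taken from the log; the module and table names are standard Linux, not something this report states):

  # Illustrative check, run inside the VM, e.g. after: out/minikube-linux-amd64 ssh -p ha-454952
  lsmod | grep ip6table_nat || echo "ip6table_nat not loaded"
  sudo modprobe ip6table_nat        # may fail if the guest kernel ships without the module
  sudo ip6tables -t nat -L -n       # succeeds only once the nat table exists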

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (315.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-575162
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-575162
E0404 22:16:53.530086   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 22:18:09.142216   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-575162: exit status 82 (2m2.074337404s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-575162-m03"  ...
	* Stopping node "multinode-575162-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-575162" : exit status 82
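Exit status 82 with GUEST_STOP_TIMEOUT means minikube gave up waiting for the VMs to leave the "Running" state. With the kvm2 driver each node is a libvirt domain (here presumably named multinode-575162, multinode-575162-m02 and multinode-575162-m03, matching the node names above), so a stuck machine can be inspected and, if necessary, forced down from the host. A hypothetical manual follow-up, not part of the test flow:

  virsh list --all | grep multinode-575162    # current state of each domain
  virsh dominfo multinode-575162-m02          # details for one of the nodes being stopped
  virsh shutdown multinode-575162-m02         # request a graceful ACPI shutdown
  # virsh destroy multinode-575162-m02        # last resort: hard power-off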
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-575162 --wait=true -v=8 --alsologtostderr
E0404 22:18:50.480841   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 22:21:12.187508   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-575162 --wait=true -v=8 --alsologtostderr: (3m10.591082299s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-575162
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-575162 -n multinode-575162
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-575162 logs -n 25: (1.626278054s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m02:/home/docker/cp-test.txt                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2792670073/001/cp-test_multinode-575162-m02.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m02:/home/docker/cp-test.txt                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162:/home/docker/cp-test_multinode-575162-m02_multinode-575162.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n multinode-575162 sudo cat                                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /home/docker/cp-test_multinode-575162-m02_multinode-575162.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m02:/home/docker/cp-test.txt                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03:/home/docker/cp-test_multinode-575162-m02_multinode-575162-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n multinode-575162-m03 sudo cat                                   | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /home/docker/cp-test_multinode-575162-m02_multinode-575162-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp testdata/cp-test.txt                                                | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m03:/home/docker/cp-test.txt                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2792670073/001/cp-test_multinode-575162-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m03:/home/docker/cp-test.txt                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162:/home/docker/cp-test_multinode-575162-m03_multinode-575162.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n multinode-575162 sudo cat                                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /home/docker/cp-test_multinode-575162-m03_multinode-575162.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m03:/home/docker/cp-test.txt                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m02:/home/docker/cp-test_multinode-575162-m03_multinode-575162-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n multinode-575162-m02 sudo cat                                   | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /home/docker/cp-test_multinode-575162-m03_multinode-575162-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-575162 node stop m03                                                          | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	| node    | multinode-575162 node start                                                             | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:16 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-575162                                                                | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:16 UTC |                     |
	| stop    | -p multinode-575162                                                                     | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:16 UTC |                     |
	| start   | -p multinode-575162                                                                     | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:18 UTC | 04 Apr 24 22:21 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-575162                                                                | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:21 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 22:18:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 22:18:16.113826   37825 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:18:16.114087   37825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:18:16.114098   37825 out.go:304] Setting ErrFile to fd 2...
	I0404 22:18:16.114102   37825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:18:16.114270   37825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:18:16.114784   37825 out.go:298] Setting JSON to false
	I0404 22:18:16.115728   37825 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3642,"bootTime":1712265455,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:18:16.115798   37825 start.go:139] virtualization: kvm guest
	I0404 22:18:16.118766   37825 out.go:177] * [multinode-575162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:18:16.121039   37825 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:18:16.120997   37825 notify.go:220] Checking for updates...
	I0404 22:18:16.122829   37825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:18:16.124498   37825 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:18:16.126164   37825 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:18:16.127893   37825 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:18:16.129586   37825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:18:16.131992   37825 config.go:182] Loaded profile config "multinode-575162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:18:16.132136   37825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:18:16.132771   37825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:18:16.132821   37825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:18:16.149152   37825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0404 22:18:16.149553   37825 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:18:16.150214   37825 main.go:141] libmachine: Using API Version  1
	I0404 22:18:16.150246   37825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:18:16.150605   37825 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:18:16.150873   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:18:16.188758   37825 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 22:18:16.190536   37825 start.go:297] selected driver: kvm2
	I0404 22:18:16.190559   37825 start.go:901] validating driver "kvm2" against &{Name:multinode-575162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.29.3 ClusterName:multinode-575162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:18:16.190730   37825 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:18:16.191140   37825 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:18:16.191230   37825 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:18:16.207425   37825 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:18:16.208113   37825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:18:16.208215   37825 cni.go:84] Creating CNI manager for ""
	I0404 22:18:16.208230   37825 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0404 22:18:16.208309   37825 start.go:340] cluster config:
	{Name:multinode-575162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-575162 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:18:16.208464   37825 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:18:16.210727   37825 out.go:177] * Starting "multinode-575162" primary control-plane node in "multinode-575162" cluster
	I0404 22:18:16.212423   37825 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:18:16.212472   37825 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0404 22:18:16.212482   37825 cache.go:56] Caching tarball of preloaded images
	I0404 22:18:16.212634   37825 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 22:18:16.212656   37825 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0404 22:18:16.212820   37825 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/config.json ...
	I0404 22:18:16.213048   37825 start.go:360] acquireMachinesLock for multinode-575162: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:18:16.213104   37825 start.go:364] duration metric: took 35.313µs to acquireMachinesLock for "multinode-575162"
	I0404 22:18:16.213120   37825 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:18:16.213125   37825 fix.go:54] fixHost starting: 
	I0404 22:18:16.213413   37825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:18:16.213448   37825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:18:16.228280   37825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0404 22:18:16.228752   37825 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:18:16.229302   37825 main.go:141] libmachine: Using API Version  1
	I0404 22:18:16.229330   37825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:18:16.229674   37825 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:18:16.229938   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:18:16.230095   37825 main.go:141] libmachine: (multinode-575162) Calling .GetState
	I0404 22:18:16.232025   37825 fix.go:112] recreateIfNeeded on multinode-575162: state=Running err=<nil>
	W0404 22:18:16.232051   37825 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:18:16.234548   37825 out.go:177] * Updating the running kvm2 "multinode-575162" VM ...
	I0404 22:18:16.236248   37825 machine.go:94] provisionDockerMachine start ...
	I0404 22:18:16.236271   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:18:16.236456   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.239103   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.239630   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.239650   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.239804   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:18:16.239980   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.240176   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.240325   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:18:16.240515   37825 main.go:141] libmachine: Using SSH client type: native
	I0404 22:18:16.240697   37825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0404 22:18:16.240707   37825 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:18:16.354024   37825 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-575162
	
	I0404 22:18:16.354058   37825 main.go:141] libmachine: (multinode-575162) Calling .GetMachineName
	I0404 22:18:16.354308   37825 buildroot.go:166] provisioning hostname "multinode-575162"
	I0404 22:18:16.354340   37825 main.go:141] libmachine: (multinode-575162) Calling .GetMachineName
	I0404 22:18:16.354590   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.357851   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.358338   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.358372   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.358507   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:18:16.358733   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.358945   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.359094   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:18:16.359263   37825 main.go:141] libmachine: Using SSH client type: native
	I0404 22:18:16.359482   37825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0404 22:18:16.359502   37825 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-575162 && echo "multinode-575162" | sudo tee /etc/hostname
	I0404 22:18:16.484980   37825 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-575162
	
	I0404 22:18:16.485047   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.487952   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.488433   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.488466   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.488705   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:18:16.488909   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.489118   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.489325   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:18:16.489516   37825 main.go:141] libmachine: Using SSH client type: native
	I0404 22:18:16.489713   37825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0404 22:18:16.489731   37825 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-575162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-575162/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-575162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:18:16.597837   37825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:18:16.597863   37825 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:18:16.597910   37825 buildroot.go:174] setting up certificates
	I0404 22:18:16.597921   37825 provision.go:84] configureAuth start
	I0404 22:18:16.597932   37825 main.go:141] libmachine: (multinode-575162) Calling .GetMachineName
	I0404 22:18:16.598269   37825 main.go:141] libmachine: (multinode-575162) Calling .GetIP
	I0404 22:18:16.601285   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.601796   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.601818   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.602092   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.604618   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.605041   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.605071   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.605203   37825 provision.go:143] copyHostCerts
	I0404 22:18:16.605226   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:18:16.605257   37825 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:18:16.605266   37825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:18:16.605328   37825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:18:16.605482   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:18:16.605515   37825 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:18:16.605526   37825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:18:16.605572   37825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:18:16.605643   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:18:16.605666   37825 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:18:16.605676   37825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:18:16.605714   37825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:18:16.605824   37825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.multinode-575162 san=[127.0.0.1 192.168.39.203 localhost minikube multinode-575162]
	I0404 22:18:16.702652   37825 provision.go:177] copyRemoteCerts
	I0404 22:18:16.702718   37825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:18:16.702740   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.705943   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.706453   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.706495   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.706761   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:18:16.706973   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.707209   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:18:16.707376   37825 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/multinode-575162/id_rsa Username:docker}
	I0404 22:18:16.803957   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0404 22:18:16.804042   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:18:16.834039   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0404 22:18:16.834112   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0404 22:18:16.875473   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0404 22:18:16.875550   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:18:16.902409   37825 provision.go:87] duration metric: took 304.474569ms to configureAuth
	I0404 22:18:16.902444   37825 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:18:16.902676   37825 config.go:182] Loaded profile config "multinode-575162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:18:16.902769   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.906167   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.906653   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.906690   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.906870   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:18:16.907137   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.907354   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.907528   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:18:16.907705   37825 main.go:141] libmachine: Using SSH client type: native
	I0404 22:18:16.907859   37825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0404 22:18:16.907873   37825 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:19:47.643994   37825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:19:47.644026   37825 machine.go:97] duration metric: took 1m31.407760177s to provisionDockerMachine
	I0404 22:19:47.644057   37825 start.go:293] postStartSetup for "multinode-575162" (driver="kvm2")
	I0404 22:19:47.644077   37825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:19:47.644101   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:19:47.644476   37825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:19:47.644505   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:19:47.647785   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.648256   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:47.648292   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.648512   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:19:47.648703   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:19:47.648864   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:19:47.649062   37825 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/multinode-575162/id_rsa Username:docker}
	I0404 22:19:47.737469   37825 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:19:47.742069   37825 command_runner.go:130] > NAME=Buildroot
	I0404 22:19:47.742091   37825 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0404 22:19:47.742097   37825 command_runner.go:130] > ID=buildroot
	I0404 22:19:47.742104   37825 command_runner.go:130] > VERSION_ID=2023.02.9
	I0404 22:19:47.742112   37825 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0404 22:19:47.742149   37825 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:19:47.742163   37825 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:19:47.742239   37825 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:19:47.742322   37825 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:19:47.742333   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /etc/ssl/certs/125542.pem
	I0404 22:19:47.742412   37825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:19:47.753182   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:19:47.780580   37825 start.go:296] duration metric: took 136.502653ms for postStartSetup
	I0404 22:19:47.780621   37825 fix.go:56] duration metric: took 1m31.567495889s for fixHost
	I0404 22:19:47.780641   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:19:47.783445   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.783880   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:47.783919   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.784076   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:19:47.784283   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:19:47.784451   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:19:47.784590   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:19:47.784720   37825 main.go:141] libmachine: Using SSH client type: native
	I0404 22:19:47.784871   37825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0404 22:19:47.784881   37825 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:19:47.901537   37825 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712269187.882279018
	
	I0404 22:19:47.901564   37825 fix.go:216] guest clock: 1712269187.882279018
	I0404 22:19:47.901574   37825 fix.go:229] Guest: 2024-04-04 22:19:47.882279018 +0000 UTC Remote: 2024-04-04 22:19:47.780625428 +0000 UTC m=+91.716975930 (delta=101.65359ms)
	I0404 22:19:47.901601   37825 fix.go:200] guest clock delta is within tolerance: 101.65359ms
	I0404 22:19:47.901606   37825 start.go:83] releasing machines lock for "multinode-575162", held for 1m31.688491214s
	I0404 22:19:47.901623   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:19:47.901951   37825 main.go:141] libmachine: (multinode-575162) Calling .GetIP
	I0404 22:19:47.904881   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.905260   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:47.905296   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.905460   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:19:47.906046   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:19:47.906227   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:19:47.906317   37825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:19:47.906367   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:19:47.906415   37825 ssh_runner.go:195] Run: cat /version.json
	I0404 22:19:47.906442   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:19:47.908762   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.908972   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.909114   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:47.909138   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.909309   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:19:47.909354   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:47.909384   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.909499   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:19:47.909571   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:19:47.909657   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:19:47.909727   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:19:47.909777   37825 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/multinode-575162/id_rsa Username:docker}
	I0404 22:19:47.909848   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:19:47.909945   37825 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/multinode-575162/id_rsa Username:docker}
	I0404 22:19:47.985919   37825 command_runner.go:130] > {"iso_version": "v1.33.0-1712138767-18566", "kicbase_version": "v0.0.43-1711559786-18485", "minikube_version": "v1.33.0-beta.0", "commit": "5c97bd855810b9924fd5c0368bb36a4a341f7234"}
	I0404 22:19:47.986122   37825 ssh_runner.go:195] Run: systemctl --version
	I0404 22:19:48.021526   37825 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0404 22:19:48.022205   37825 command_runner.go:130] > systemd 252 (252)
	I0404 22:19:48.022240   37825 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0404 22:19:48.022310   37825 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:19:48.186613   37825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0404 22:19:48.195413   37825 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0404 22:19:48.195459   37825 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:19:48.195509   37825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:19:48.206186   37825 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0404 22:19:48.206214   37825 start.go:494] detecting cgroup driver to use...
	I0404 22:19:48.206299   37825 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:19:48.227089   37825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:19:48.242398   37825 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:19:48.242466   37825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:19:48.257288   37825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:19:48.272558   37825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:19:48.432045   37825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:19:48.591283   37825 docker.go:233] disabling docker service ...
	I0404 22:19:48.591358   37825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:19:48.612695   37825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:19:48.627975   37825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:19:48.781854   37825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:19:48.946842   37825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:19:48.964784   37825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:19:48.985597   37825 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0404 22:19:48.985652   37825 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:19:48.985712   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:48.997310   37825 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:19:48.997388   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.009512   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.021458   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.033814   37825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:19:49.045574   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.057334   37825 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.069274   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.082363   37825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:19:49.093184   37825 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0404 22:19:49.093245   37825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:19:49.103678   37825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:19:49.253323   37825 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:19:57.349108   37825 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.095744231s)
	I0404 22:19:57.349147   37825 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:19:57.349207   37825 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:19:57.355111   37825 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0404 22:19:57.355140   37825 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0404 22:19:57.355147   37825 command_runner.go:130] > Device: 0,22	Inode: 1339        Links: 1
	I0404 22:19:57.355154   37825 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0404 22:19:57.355158   37825 command_runner.go:130] > Access: 2024-04-04 22:19:57.201575493 +0000
	I0404 22:19:57.355164   37825 command_runner.go:130] > Modify: 2024-04-04 22:19:57.201575493 +0000
	I0404 22:19:57.355169   37825 command_runner.go:130] > Change: 2024-04-04 22:19:57.201575493 +0000
	I0404 22:19:57.355172   37825 command_runner.go:130] >  Birth: -
	I0404 22:19:57.355188   37825 start.go:562] Will wait 60s for crictl version
	I0404 22:19:57.355234   37825 ssh_runner.go:195] Run: which crictl
	I0404 22:19:57.359562   37825 command_runner.go:130] > /usr/bin/crictl
	I0404 22:19:57.359683   37825 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:19:57.401287   37825 command_runner.go:130] > Version:  0.1.0
	I0404 22:19:57.401310   37825 command_runner.go:130] > RuntimeName:  cri-o
	I0404 22:19:57.401314   37825 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0404 22:19:57.401320   37825 command_runner.go:130] > RuntimeApiVersion:  v1
	I0404 22:19:57.401396   37825 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:19:57.401485   37825 ssh_runner.go:195] Run: crio --version
	I0404 22:19:57.433835   37825 command_runner.go:130] > crio version 1.29.1
	I0404 22:19:57.433867   37825 command_runner.go:130] > Version:        1.29.1
	I0404 22:19:57.433875   37825 command_runner.go:130] > GitCommit:      unknown
	I0404 22:19:57.433882   37825 command_runner.go:130] > GitCommitDate:  unknown
	I0404 22:19:57.433888   37825 command_runner.go:130] > GitTreeState:   clean
	I0404 22:19:57.433900   37825 command_runner.go:130] > BuildDate:      2024-04-03T13:58:01Z
	I0404 22:19:57.433907   37825 command_runner.go:130] > GoVersion:      go1.21.6
	I0404 22:19:57.433913   37825 command_runner.go:130] > Compiler:       gc
	I0404 22:19:57.433921   37825 command_runner.go:130] > Platform:       linux/amd64
	I0404 22:19:57.433926   37825 command_runner.go:130] > Linkmode:       dynamic
	I0404 22:19:57.433941   37825 command_runner.go:130] > BuildTags:      
	I0404 22:19:57.433949   37825 command_runner.go:130] >   containers_image_ostree_stub
	I0404 22:19:57.433959   37825 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0404 22:19:57.433965   37825 command_runner.go:130] >   btrfs_noversion
	I0404 22:19:57.433973   37825 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0404 22:19:57.433977   37825 command_runner.go:130] >   libdm_no_deferred_remove
	I0404 22:19:57.433981   37825 command_runner.go:130] >   seccomp
	I0404 22:19:57.433985   37825 command_runner.go:130] > LDFlags:          unknown
	I0404 22:19:57.433994   37825 command_runner.go:130] > SeccompEnabled:   true
	I0404 22:19:57.434001   37825 command_runner.go:130] > AppArmorEnabled:  false
	I0404 22:19:57.434063   37825 ssh_runner.go:195] Run: crio --version
	I0404 22:19:57.465975   37825 command_runner.go:130] > crio version 1.29.1
	I0404 22:19:57.465997   37825 command_runner.go:130] > Version:        1.29.1
	I0404 22:19:57.466003   37825 command_runner.go:130] > GitCommit:      unknown
	I0404 22:19:57.466007   37825 command_runner.go:130] > GitCommitDate:  unknown
	I0404 22:19:57.466011   37825 command_runner.go:130] > GitTreeState:   clean
	I0404 22:19:57.466021   37825 command_runner.go:130] > BuildDate:      2024-04-03T13:58:01Z
	I0404 22:19:57.466025   37825 command_runner.go:130] > GoVersion:      go1.21.6
	I0404 22:19:57.466030   37825 command_runner.go:130] > Compiler:       gc
	I0404 22:19:57.466036   37825 command_runner.go:130] > Platform:       linux/amd64
	I0404 22:19:57.466042   37825 command_runner.go:130] > Linkmode:       dynamic
	I0404 22:19:57.466048   37825 command_runner.go:130] > BuildTags:      
	I0404 22:19:57.466054   37825 command_runner.go:130] >   containers_image_ostree_stub
	I0404 22:19:57.466061   37825 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0404 22:19:57.466067   37825 command_runner.go:130] >   btrfs_noversion
	I0404 22:19:57.466081   37825 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0404 22:19:57.466093   37825 command_runner.go:130] >   libdm_no_deferred_remove
	I0404 22:19:57.466099   37825 command_runner.go:130] >   seccomp
	I0404 22:19:57.466105   37825 command_runner.go:130] > LDFlags:          unknown
	I0404 22:19:57.466112   37825 command_runner.go:130] > SeccompEnabled:   true
	I0404 22:19:57.466118   37825 command_runner.go:130] > AppArmorEnabled:  false
	I0404 22:19:57.469337   37825 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:19:57.470824   37825 main.go:141] libmachine: (multinode-575162) Calling .GetIP
	I0404 22:19:57.473887   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:57.474276   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:57.474299   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:57.474521   37825 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:19:57.479273   37825 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0404 22:19:57.479361   37825 kubeadm.go:877] updating cluster {Name:multinode-575162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-575162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:19:57.479487   37825 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:19:57.479550   37825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:19:57.524835   37825 command_runner.go:130] > {
	I0404 22:19:57.524859   37825 command_runner.go:130] >   "images": [
	I0404 22:19:57.524864   37825 command_runner.go:130] >     {
	I0404 22:19:57.524871   37825 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0404 22:19:57.524876   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.524881   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0404 22:19:57.524890   37825 command_runner.go:130] >       ],
	I0404 22:19:57.524896   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.524918   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0404 22:19:57.524933   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0404 22:19:57.524942   37825 command_runner.go:130] >       ],
	I0404 22:19:57.524950   37825 command_runner.go:130] >       "size": "65291810",
	I0404 22:19:57.524960   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.524964   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.524971   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.524975   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.524980   37825 command_runner.go:130] >     },
	I0404 22:19:57.524984   37825 command_runner.go:130] >     {
	I0404 22:19:57.524996   37825 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0404 22:19:57.525004   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525012   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0404 22:19:57.525017   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525023   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525038   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0404 22:19:57.525051   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0404 22:19:57.525069   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525075   37825 command_runner.go:130] >       "size": "1363676",
	I0404 22:19:57.525079   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.525089   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.525093   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525098   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525102   37825 command_runner.go:130] >     },
	I0404 22:19:57.525108   37825 command_runner.go:130] >     {
	I0404 22:19:57.525114   37825 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0404 22:19:57.525122   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525136   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0404 22:19:57.525146   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525153   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525167   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0404 22:19:57.525177   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0404 22:19:57.525181   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525185   37825 command_runner.go:130] >       "size": "31470524",
	I0404 22:19:57.525189   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.525196   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.525200   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525208   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525214   37825 command_runner.go:130] >     },
	I0404 22:19:57.525224   37825 command_runner.go:130] >     {
	I0404 22:19:57.525235   37825 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0404 22:19:57.525244   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525255   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0404 22:19:57.525264   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525274   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525288   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0404 22:19:57.525305   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0404 22:19:57.525314   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525328   37825 command_runner.go:130] >       "size": "61245718",
	I0404 22:19:57.525336   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.525342   37825 command_runner.go:130] >       "username": "nonroot",
	I0404 22:19:57.525349   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525359   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525378   37825 command_runner.go:130] >     },
	I0404 22:19:57.525387   37825 command_runner.go:130] >     {
	I0404 22:19:57.525401   37825 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0404 22:19:57.525410   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525419   37825 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0404 22:19:57.525428   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525438   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525468   37825 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0404 22:19:57.525483   37825 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0404 22:19:57.525492   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525502   37825 command_runner.go:130] >       "size": "150779692",
	I0404 22:19:57.525510   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.525520   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.525526   37825 command_runner.go:130] >       },
	I0404 22:19:57.525533   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.525539   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525549   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525557   37825 command_runner.go:130] >     },
	I0404 22:19:57.525562   37825 command_runner.go:130] >     {
	I0404 22:19:57.525576   37825 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0404 22:19:57.525586   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525597   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0404 22:19:57.525605   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525614   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525628   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0404 22:19:57.525639   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0404 22:19:57.525649   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525660   37825 command_runner.go:130] >       "size": "128508878",
	I0404 22:19:57.525666   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.525676   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.525685   37825 command_runner.go:130] >       },
	I0404 22:19:57.525694   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.525708   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525718   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525726   37825 command_runner.go:130] >     },
	I0404 22:19:57.525729   37825 command_runner.go:130] >     {
	I0404 22:19:57.525747   37825 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0404 22:19:57.525781   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525790   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0404 22:19:57.525801   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525811   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525826   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0404 22:19:57.525842   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0404 22:19:57.525851   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525859   37825 command_runner.go:130] >       "size": "123142962",
	I0404 22:19:57.525866   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.525873   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.525882   37825 command_runner.go:130] >       },
	I0404 22:19:57.525892   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.525901   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525912   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525920   37825 command_runner.go:130] >     },
	I0404 22:19:57.525929   37825 command_runner.go:130] >     {
	I0404 22:19:57.525942   37825 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0404 22:19:57.525949   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525956   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0404 22:19:57.525966   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525976   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.526005   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0404 22:19:57.526020   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0404 22:19:57.526029   37825 command_runner.go:130] >       ],
	I0404 22:19:57.526039   37825 command_runner.go:130] >       "size": "83634073",
	I0404 22:19:57.526047   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.526062   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.526069   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.526076   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.526081   37825 command_runner.go:130] >     },
	I0404 22:19:57.526087   37825 command_runner.go:130] >     {
	I0404 22:19:57.526097   37825 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0404 22:19:57.526106   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.526117   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0404 22:19:57.526126   37825 command_runner.go:130] >       ],
	I0404 22:19:57.526137   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.526153   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0404 22:19:57.526168   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0404 22:19:57.526178   37825 command_runner.go:130] >       ],
	I0404 22:19:57.526188   37825 command_runner.go:130] >       "size": "60724018",
	I0404 22:19:57.526197   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.526206   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.526215   37825 command_runner.go:130] >       },
	I0404 22:19:57.526229   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.526235   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.526240   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.526248   37825 command_runner.go:130] >     },
	I0404 22:19:57.526255   37825 command_runner.go:130] >     {
	I0404 22:19:57.526269   37825 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0404 22:19:57.526279   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.526293   37825 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0404 22:19:57.526302   37825 command_runner.go:130] >       ],
	I0404 22:19:57.526312   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.526326   37825 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0404 22:19:57.526342   37825 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0404 22:19:57.526350   37825 command_runner.go:130] >       ],
	I0404 22:19:57.526356   37825 command_runner.go:130] >       "size": "750414",
	I0404 22:19:57.526364   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.526375   37825 command_runner.go:130] >         "value": "65535"
	I0404 22:19:57.526384   37825 command_runner.go:130] >       },
	I0404 22:19:57.526391   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.526402   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.526412   37825 command_runner.go:130] >       "pinned": true
	I0404 22:19:57.526419   37825 command_runner.go:130] >     }
	I0404 22:19:57.526427   37825 command_runner.go:130] >   ]
	I0404 22:19:57.526433   37825 command_runner.go:130] > }
	I0404 22:19:57.526635   37825 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:19:57.526649   37825 crio.go:433] Images already preloaded, skipping extraction
	I0404 22:19:57.526706   37825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:19:57.565347   37825 command_runner.go:130] > {
	I0404 22:19:57.565376   37825 command_runner.go:130] >   "images": [
	I0404 22:19:57.565380   37825 command_runner.go:130] >     {
	I0404 22:19:57.565388   37825 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0404 22:19:57.565393   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.565402   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0404 22:19:57.565406   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565410   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.565427   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0404 22:19:57.565437   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0404 22:19:57.565443   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565463   37825 command_runner.go:130] >       "size": "65291810",
	I0404 22:19:57.565473   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.565478   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.565488   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.565495   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.565503   37825 command_runner.go:130] >     },
	I0404 22:19:57.565507   37825 command_runner.go:130] >     {
	I0404 22:19:57.565516   37825 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0404 22:19:57.565520   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.565528   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0404 22:19:57.565531   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565535   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.565543   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0404 22:19:57.565550   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0404 22:19:57.565557   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565563   37825 command_runner.go:130] >       "size": "1363676",
	I0404 22:19:57.565573   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.565589   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.565599   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.565609   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.565616   37825 command_runner.go:130] >     },
	I0404 22:19:57.565619   37825 command_runner.go:130] >     {
	I0404 22:19:57.565631   37825 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0404 22:19:57.565637   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.565643   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0404 22:19:57.565648   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565660   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.565675   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0404 22:19:57.565693   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0404 22:19:57.565702   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565712   37825 command_runner.go:130] >       "size": "31470524",
	I0404 22:19:57.565721   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.565729   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.565733   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.565739   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.565742   37825 command_runner.go:130] >     },
	I0404 22:19:57.565746   37825 command_runner.go:130] >     {
	I0404 22:19:57.565751   37825 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0404 22:19:57.565756   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.565761   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0404 22:19:57.565770   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565775   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.565791   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0404 22:19:57.565852   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0404 22:19:57.565864   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565868   37825 command_runner.go:130] >       "size": "61245718",
	I0404 22:19:57.565872   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.565876   37825 command_runner.go:130] >       "username": "nonroot",
	I0404 22:19:57.565883   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.565889   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.565897   37825 command_runner.go:130] >     },
	I0404 22:19:57.565906   37825 command_runner.go:130] >     {
	I0404 22:19:57.565930   37825 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0404 22:19:57.565940   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.565950   37825 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0404 22:19:57.565959   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565969   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.565981   37825 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0404 22:19:57.565994   37825 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0404 22:19:57.566003   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566010   37825 command_runner.go:130] >       "size": "150779692",
	I0404 22:19:57.566019   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.566035   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.566044   37825 command_runner.go:130] >       },
	I0404 22:19:57.566054   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566062   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566071   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.566078   37825 command_runner.go:130] >     },
	I0404 22:19:57.566082   37825 command_runner.go:130] >     {
	I0404 22:19:57.566094   37825 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0404 22:19:57.566104   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.566117   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0404 22:19:57.566126   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566135   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.566150   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0404 22:19:57.566165   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0404 22:19:57.566171   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566176   37825 command_runner.go:130] >       "size": "128508878",
	I0404 22:19:57.566185   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.566195   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.566201   37825 command_runner.go:130] >       },
	I0404 22:19:57.566207   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566217   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566226   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.566234   37825 command_runner.go:130] >     },
	I0404 22:19:57.566242   37825 command_runner.go:130] >     {
	I0404 22:19:57.566253   37825 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0404 22:19:57.566263   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.566272   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0404 22:19:57.566280   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566290   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.566306   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0404 22:19:57.566322   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0404 22:19:57.566331   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566341   37825 command_runner.go:130] >       "size": "123142962",
	I0404 22:19:57.566350   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.566359   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.566367   37825 command_runner.go:130] >       },
	I0404 22:19:57.566385   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566394   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566404   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.566413   37825 command_runner.go:130] >     },
	I0404 22:19:57.566421   37825 command_runner.go:130] >     {
	I0404 22:19:57.566431   37825 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0404 22:19:57.566439   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.566451   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0404 22:19:57.566459   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566469   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.566503   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0404 22:19:57.566518   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0404 22:19:57.566522   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566527   37825 command_runner.go:130] >       "size": "83634073",
	I0404 22:19:57.566533   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.566539   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566546   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566554   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.566560   37825 command_runner.go:130] >     },
	I0404 22:19:57.566565   37825 command_runner.go:130] >     {
	I0404 22:19:57.566577   37825 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0404 22:19:57.566587   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.566595   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0404 22:19:57.566602   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566609   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.566623   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0404 22:19:57.566636   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0404 22:19:57.566642   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566649   37825 command_runner.go:130] >       "size": "60724018",
	I0404 22:19:57.566656   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.566663   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.566669   37825 command_runner.go:130] >       },
	I0404 22:19:57.566680   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566687   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566695   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.566704   37825 command_runner.go:130] >     },
	I0404 22:19:57.566717   37825 command_runner.go:130] >     {
	I0404 22:19:57.566730   37825 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0404 22:19:57.566739   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.566748   37825 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0404 22:19:57.566757   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566765   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.566780   37825 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0404 22:19:57.566796   37825 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0404 22:19:57.566812   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566823   37825 command_runner.go:130] >       "size": "750414",
	I0404 22:19:57.566833   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.566840   37825 command_runner.go:130] >         "value": "65535"
	I0404 22:19:57.566849   37825 command_runner.go:130] >       },
	I0404 22:19:57.566856   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566866   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566874   37825 command_runner.go:130] >       "pinned": true
	I0404 22:19:57.566882   37825 command_runner.go:130] >     }
	I0404 22:19:57.566887   37825 command_runner.go:130] >   ]
	I0404 22:19:57.566895   37825 command_runner.go:130] > }
	I0404 22:19:57.567039   37825 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:19:57.567052   37825 cache_images.go:84] Images are preloaded, skipping loading
	I0404 22:19:57.567061   37825 kubeadm.go:928] updating node { 192.168.39.203 8443 v1.29.3 crio true true} ...
	I0404 22:19:57.567172   37825 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-575162 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-575162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:19:57.567251   37825 ssh_runner.go:195] Run: crio config
	I0404 22:19:57.617542   37825 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0404 22:19:57.617572   37825 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0404 22:19:57.617581   37825 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0404 22:19:57.617586   37825 command_runner.go:130] > #
	I0404 22:19:57.617610   37825 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0404 22:19:57.617619   37825 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0404 22:19:57.617627   37825 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0404 22:19:57.617636   37825 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0404 22:19:57.617641   37825 command_runner.go:130] > # reload'.
	I0404 22:19:57.617656   37825 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0404 22:19:57.617666   37825 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0404 22:19:57.617679   37825 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0404 22:19:57.617690   37825 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0404 22:19:57.617697   37825 command_runner.go:130] > [crio]
	I0404 22:19:57.617708   37825 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0404 22:19:57.617720   37825 command_runner.go:130] > # containers images, in this directory.
	I0404 22:19:57.617743   37825 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0404 22:19:57.617793   37825 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0404 22:19:57.617890   37825 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0404 22:19:57.617916   37825 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0404 22:19:57.618074   37825 command_runner.go:130] > # imagestore = ""
	I0404 22:19:57.618111   37825 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0404 22:19:57.618126   37825 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0404 22:19:57.618262   37825 command_runner.go:130] > storage_driver = "overlay"
	I0404 22:19:57.618280   37825 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0404 22:19:57.618290   37825 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0404 22:19:57.618305   37825 command_runner.go:130] > storage_option = [
	I0404 22:19:57.618469   37825 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0404 22:19:57.618503   37825 command_runner.go:130] > ]
	I0404 22:19:57.618518   37825 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0404 22:19:57.618531   37825 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0404 22:19:57.618864   37825 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0404 22:19:57.618882   37825 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0404 22:19:57.618893   37825 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0404 22:19:57.618901   37825 command_runner.go:130] > # always happen on a node reboot
	I0404 22:19:57.619105   37825 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0404 22:19:57.619131   37825 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0404 22:19:57.619144   37825 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0404 22:19:57.619155   37825 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0404 22:19:57.619260   37825 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0404 22:19:57.619276   37825 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0404 22:19:57.619289   37825 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0404 22:19:57.619466   37825 command_runner.go:130] > # internal_wipe = true
	I0404 22:19:57.619487   37825 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0404 22:19:57.619496   37825 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0404 22:19:57.619768   37825 command_runner.go:130] > # internal_repair = false
	I0404 22:19:57.619786   37825 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0404 22:19:57.619796   37825 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0404 22:19:57.619804   37825 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0404 22:19:57.620097   37825 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0404 22:19:57.620129   37825 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0404 22:19:57.620136   37825 command_runner.go:130] > [crio.api]
	I0404 22:19:57.620144   37825 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0404 22:19:57.620406   37825 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0404 22:19:57.620424   37825 command_runner.go:130] > # IP address on which the stream server will listen.
	I0404 22:19:57.620857   37825 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0404 22:19:57.620882   37825 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0404 22:19:57.620891   37825 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0404 22:19:57.621105   37825 command_runner.go:130] > # stream_port = "0"
	I0404 22:19:57.621124   37825 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0404 22:19:57.621318   37825 command_runner.go:130] > # stream_enable_tls = false
	I0404 22:19:57.621338   37825 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0404 22:19:57.621542   37825 command_runner.go:130] > # stream_idle_timeout = ""
	I0404 22:19:57.621558   37825 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0404 22:19:57.621568   37825 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0404 22:19:57.621577   37825 command_runner.go:130] > # minutes.
	I0404 22:19:57.621855   37825 command_runner.go:130] > # stream_tls_cert = ""
	I0404 22:19:57.621870   37825 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0404 22:19:57.621881   37825 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0404 22:19:57.621991   37825 command_runner.go:130] > # stream_tls_key = ""
	I0404 22:19:57.622006   37825 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0404 22:19:57.622016   37825 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0404 22:19:57.622040   37825 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0404 22:19:57.622051   37825 command_runner.go:130] > # stream_tls_ca = ""
	I0404 22:19:57.622062   37825 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0404 22:19:57.622070   37825 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0404 22:19:57.622086   37825 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0404 22:19:57.622104   37825 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0404 22:19:57.622115   37825 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0404 22:19:57.622128   37825 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0404 22:19:57.622137   37825 command_runner.go:130] > [crio.runtime]
	I0404 22:19:57.622147   37825 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0404 22:19:57.622160   37825 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0404 22:19:57.622168   37825 command_runner.go:130] > # "nofile=1024:2048"
	I0404 22:19:57.622176   37825 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0404 22:19:57.622184   37825 command_runner.go:130] > # default_ulimits = [
	I0404 22:19:57.622190   37825 command_runner.go:130] > # ]
	I0404 22:19:57.622202   37825 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0404 22:19:57.622214   37825 command_runner.go:130] > # no_pivot = false
	I0404 22:19:57.622225   37825 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0404 22:19:57.622238   37825 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0404 22:19:57.622249   37825 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0404 22:19:57.622260   37825 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0404 22:19:57.622265   37825 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0404 22:19:57.622280   37825 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0404 22:19:57.622292   37825 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0404 22:19:57.622300   37825 command_runner.go:130] > # Cgroup setting for conmon
	I0404 22:19:57.622312   37825 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0404 22:19:57.622323   37825 command_runner.go:130] > conmon_cgroup = "pod"
	I0404 22:19:57.622333   37825 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0404 22:19:57.622344   37825 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0404 22:19:57.622361   37825 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0404 22:19:57.622369   37825 command_runner.go:130] > conmon_env = [
	I0404 22:19:57.622379   37825 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0404 22:19:57.622388   37825 command_runner.go:130] > ]
	I0404 22:19:57.622397   37825 command_runner.go:130] > # Additional environment variables to set for all the
	I0404 22:19:57.622409   37825 command_runner.go:130] > # containers. These are overridden if set in the
	I0404 22:19:57.622421   37825 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0404 22:19:57.622428   37825 command_runner.go:130] > # default_env = [
	I0404 22:19:57.622434   37825 command_runner.go:130] > # ]
	I0404 22:19:57.622446   37825 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0404 22:19:57.622461   37825 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0404 22:19:57.622471   37825 command_runner.go:130] > # selinux = false
	I0404 22:19:57.622499   37825 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0404 22:19:57.622518   37825 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0404 22:19:57.622528   37825 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0404 22:19:57.622538   37825 command_runner.go:130] > # seccomp_profile = ""
	I0404 22:19:57.622548   37825 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0404 22:19:57.622561   37825 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0404 22:19:57.622579   37825 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0404 22:19:57.622591   37825 command_runner.go:130] > # which might increase security.
	I0404 22:19:57.622602   37825 command_runner.go:130] > # This option is currently deprecated,
	I0404 22:19:57.622611   37825 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0404 22:19:57.622621   37825 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0404 22:19:57.622628   37825 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0404 22:19:57.622640   37825 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0404 22:19:57.622654   37825 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0404 22:19:57.622665   37825 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I0404 22:19:57.622677   37825 command_runner.go:130] > # This option supports live configuration reload.
	I0404 22:19:57.622688   37825 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0404 22:19:57.622699   37825 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0404 22:19:57.622710   37825 command_runner.go:130] > # the cgroup blockio controller.
	I0404 22:19:57.622720   37825 command_runner.go:130] > # blockio_config_file = ""
	I0404 22:19:57.622731   37825 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0404 22:19:57.622741   37825 command_runner.go:130] > # blockio parameters.
	I0404 22:19:57.622748   37825 command_runner.go:130] > # blockio_reload = false
	I0404 22:19:57.622761   37825 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0404 22:19:57.622771   37825 command_runner.go:130] > # irqbalance daemon.
	I0404 22:19:57.622781   37825 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0404 22:19:57.622795   37825 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0404 22:19:57.622809   37825 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0404 22:19:57.622823   37825 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0404 22:19:57.622860   37825 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0404 22:19:57.622875   37825 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0404 22:19:57.622890   37825 command_runner.go:130] > # This option supports live configuration reload.
	I0404 22:19:57.622899   37825 command_runner.go:130] > # rdt_config_file = ""
	I0404 22:19:57.622905   37825 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0404 22:19:57.622910   37825 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0404 22:19:57.622946   37825 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0404 22:19:57.622962   37825 command_runner.go:130] > # separate_pull_cgroup = ""
	I0404 22:19:57.622975   37825 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0404 22:19:57.622989   37825 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0404 22:19:57.622998   37825 command_runner.go:130] > # will be added.
	I0404 22:19:57.623005   37825 command_runner.go:130] > # default_capabilities = [
	I0404 22:19:57.623015   37825 command_runner.go:130] > # 	"CHOWN",
	I0404 22:19:57.623021   37825 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0404 22:19:57.623030   37825 command_runner.go:130] > # 	"FSETID",
	I0404 22:19:57.623035   37825 command_runner.go:130] > # 	"FOWNER",
	I0404 22:19:57.623045   37825 command_runner.go:130] > # 	"SETGID",
	I0404 22:19:57.623050   37825 command_runner.go:130] > # 	"SETUID",
	I0404 22:19:57.623060   37825 command_runner.go:130] > # 	"SETPCAP",
	I0404 22:19:57.623067   37825 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0404 22:19:57.623076   37825 command_runner.go:130] > # 	"KILL",
	I0404 22:19:57.623084   37825 command_runner.go:130] > # ]
	I0404 22:19:57.623097   37825 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0404 22:19:57.623111   37825 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0404 22:19:57.623121   37825 command_runner.go:130] > # add_inheritable_capabilities = false
	I0404 22:19:57.623133   37825 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0404 22:19:57.623143   37825 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0404 22:19:57.623152   37825 command_runner.go:130] > default_sysctls = [
	I0404 22:19:57.623159   37825 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0404 22:19:57.623169   37825 command_runner.go:130] > ]
	I0404 22:19:57.623177   37825 command_runner.go:130] > # List of devices on the host that a
	I0404 22:19:57.623188   37825 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0404 22:19:57.623197   37825 command_runner.go:130] > # allowed_devices = [
	I0404 22:19:57.623203   37825 command_runner.go:130] > # 	"/dev/fuse",
	I0404 22:19:57.623211   37825 command_runner.go:130] > # ]
	I0404 22:19:57.623218   37825 command_runner.go:130] > # List of additional devices, specified as
	I0404 22:19:57.623229   37825 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0404 22:19:57.623239   37825 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0404 22:19:57.623252   37825 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0404 22:19:57.623262   37825 command_runner.go:130] > # additional_devices = [
	I0404 22:19:57.623268   37825 command_runner.go:130] > # ]
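For illustration (not taken from this run), the "<device-on-host>:<device-on-container>:<permissions>" format described above could be used like so; the device paths are hypothetical:

	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]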
	I0404 22:19:57.623279   37825 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0404 22:19:57.623286   37825 command_runner.go:130] > # cdi_spec_dirs = [
	I0404 22:19:57.623301   37825 command_runner.go:130] > # 	"/etc/cdi",
	I0404 22:19:57.623309   37825 command_runner.go:130] > # 	"/var/run/cdi",
	I0404 22:19:57.623312   37825 command_runner.go:130] > # ]
	I0404 22:19:57.623321   37825 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0404 22:19:57.623334   37825 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0404 22:19:57.623344   37825 command_runner.go:130] > # Defaults to false.
	I0404 22:19:57.623352   37825 command_runner.go:130] > # device_ownership_from_security_context = false
	I0404 22:19:57.623364   37825 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0404 22:19:57.623377   37825 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0404 22:19:57.623386   37825 command_runner.go:130] > # hooks_dir = [
	I0404 22:19:57.623398   37825 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0404 22:19:57.623407   37825 command_runner.go:130] > # ]
	I0404 22:19:57.623417   37825 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0404 22:19:57.623431   37825 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0404 22:19:57.623442   37825 command_runner.go:130] > # its default mounts from the following two files:
	I0404 22:19:57.623447   37825 command_runner.go:130] > #
	I0404 22:19:57.623460   37825 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0404 22:19:57.623473   37825 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0404 22:19:57.623483   37825 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0404 22:19:57.623487   37825 command_runner.go:130] > #
	I0404 22:19:57.623495   37825 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0404 22:19:57.623508   37825 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0404 22:19:57.623522   37825 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0404 22:19:57.623532   37825 command_runner.go:130] > #      only add mounts it finds in this file.
	I0404 22:19:57.623539   37825 command_runner.go:130] > #
	I0404 22:19:57.623546   37825 command_runner.go:130] > # default_mounts_file = ""
	I0404 22:19:57.623566   37825 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0404 22:19:57.623575   37825 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0404 22:19:57.623581   37825 command_runner.go:130] > pids_limit = 1024
	I0404 22:19:57.623594   37825 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0404 22:19:57.623608   37825 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0404 22:19:57.623621   37825 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0404 22:19:57.623636   37825 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0404 22:19:57.623645   37825 command_runner.go:130] > # log_size_max = -1
	I0404 22:19:57.623655   37825 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0404 22:19:57.623661   37825 command_runner.go:130] > # log_to_journald = false
	I0404 22:19:57.623677   37825 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0404 22:19:57.623690   37825 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0404 22:19:57.623701   37825 command_runner.go:130] > # Path to directory for container attach sockets.
	I0404 22:19:57.623710   37825 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0404 22:19:57.623721   37825 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0404 22:19:57.623732   37825 command_runner.go:130] > # bind_mount_prefix = ""
	I0404 22:19:57.623742   37825 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0404 22:19:57.623749   37825 command_runner.go:130] > # read_only = false
	I0404 22:19:57.623759   37825 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0404 22:19:57.623771   37825 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0404 22:19:57.623782   37825 command_runner.go:130] > # live configuration reload.
	I0404 22:19:57.623791   37825 command_runner.go:130] > # log_level = "info"
	I0404 22:19:57.623799   37825 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0404 22:19:57.623810   37825 command_runner.go:130] > # This option supports live configuration reload.
	I0404 22:19:57.623819   37825 command_runner.go:130] > # log_filter = ""
	I0404 22:19:57.623828   37825 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0404 22:19:57.623842   37825 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0404 22:19:57.623852   37825 command_runner.go:130] > # separated by comma.
	I0404 22:19:57.623868   37825 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0404 22:19:57.623877   37825 command_runner.go:130] > # uid_mappings = ""
	I0404 22:19:57.623887   37825 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0404 22:19:57.623900   37825 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0404 22:19:57.623909   37825 command_runner.go:130] > # separated by comma.
	I0404 22:19:57.623919   37825 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0404 22:19:57.623927   37825 command_runner.go:130] > # gid_mappings = ""
	I0404 22:19:57.623936   37825 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0404 22:19:57.623951   37825 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0404 22:19:57.623964   37825 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0404 22:19:57.623978   37825 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0404 22:19:57.623984   37825 command_runner.go:130] > # minimum_mappable_uid = -1
	I0404 22:19:57.623998   37825 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0404 22:19:57.624011   37825 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0404 22:19:57.624024   37825 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0404 22:19:57.624039   37825 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0404 22:19:57.624046   37825 command_runner.go:130] > # minimum_mappable_gid = -1
	I0404 22:19:57.624059   37825 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0404 22:19:57.624078   37825 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0404 22:19:57.624090   37825 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0404 22:19:57.624100   37825 command_runner.go:130] > # ctr_stop_timeout = 30
	I0404 22:19:57.624114   37825 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0404 22:19:57.624133   37825 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0404 22:19:57.624145   37825 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0404 22:19:57.624155   37825 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0404 22:19:57.624164   37825 command_runner.go:130] > drop_infra_ctr = false
	I0404 22:19:57.624174   37825 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0404 22:19:57.624187   37825 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0404 22:19:57.624204   37825 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0404 22:19:57.624214   37825 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0404 22:19:57.624226   37825 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0404 22:19:57.624238   37825 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0404 22:19:57.624249   37825 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0404 22:19:57.624257   37825 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0404 22:19:57.624263   37825 command_runner.go:130] > # shared_cpuset = ""
	I0404 22:19:57.624276   37825 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0404 22:19:57.624288   37825 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0404 22:19:57.624297   37825 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0404 22:19:57.624309   37825 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0404 22:19:57.624318   37825 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0404 22:19:57.624328   37825 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0404 22:19:57.624340   37825 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0404 22:19:57.624411   37825 command_runner.go:130] > # enable_criu_support = false
	I0404 22:19:57.624433   37825 command_runner.go:130] > # Enable/disable the generation of the container and
	I0404 22:19:57.624444   37825 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0404 22:19:57.624451   37825 command_runner.go:130] > # enable_pod_events = false
	I0404 22:19:57.624462   37825 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0404 22:19:57.624488   37825 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0404 22:19:57.624498   37825 command_runner.go:130] > # default_runtime = "runc"
	I0404 22:19:57.624508   37825 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0404 22:19:57.624524   37825 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I0404 22:19:57.624541   37825 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0404 22:19:57.624553   37825 command_runner.go:130] > # creation as a file is not desired either.
	I0404 22:19:57.624579   37825 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0404 22:19:57.624590   37825 command_runner.go:130] > # the hostname is being managed dynamically.
	I0404 22:19:57.624600   37825 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0404 22:19:57.624609   37825 command_runner.go:130] > # ]
	I0404 22:19:57.624619   37825 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0404 22:19:57.624633   37825 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0404 22:19:57.624646   37825 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0404 22:19:57.624658   37825 command_runner.go:130] > # Each entry in the table should follow the format:
	I0404 22:19:57.624663   37825 command_runner.go:130] > #
	I0404 22:19:57.624671   37825 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0404 22:19:57.624683   37825 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0404 22:19:57.624739   37825 command_runner.go:130] > # runtime_type = "oci"
	I0404 22:19:57.624773   37825 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0404 22:19:57.624782   37825 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0404 22:19:57.624786   37825 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0404 22:19:57.624792   37825 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0404 22:19:57.624796   37825 command_runner.go:130] > # monitor_env = []
	I0404 22:19:57.624801   37825 command_runner.go:130] > # privileged_without_host_devices = false
	I0404 22:19:57.624808   37825 command_runner.go:130] > # allowed_annotations = []
	I0404 22:19:57.624813   37825 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0404 22:19:57.624819   37825 command_runner.go:130] > # Where:
	I0404 22:19:57.624824   37825 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0404 22:19:57.624832   37825 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0404 22:19:57.624840   37825 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0404 22:19:57.624846   37825 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0404 22:19:57.624850   37825 command_runner.go:130] > #   in $PATH.
	I0404 22:19:57.624857   37825 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0404 22:19:57.624864   37825 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0404 22:19:57.624870   37825 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0404 22:19:57.624877   37825 command_runner.go:130] > #   state.
	I0404 22:19:57.624883   37825 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0404 22:19:57.624891   37825 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0404 22:19:57.624897   37825 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0404 22:19:57.624904   37825 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0404 22:19:57.624910   37825 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0404 22:19:57.624918   37825 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0404 22:19:57.624928   37825 command_runner.go:130] > #   The currently recognized values are:
	I0404 22:19:57.624937   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0404 22:19:57.624944   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0404 22:19:57.624951   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0404 22:19:57.624956   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0404 22:19:57.624966   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0404 22:19:57.624972   37825 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0404 22:19:57.624982   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0404 22:19:57.624990   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0404 22:19:57.624996   37825 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0404 22:19:57.625002   37825 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0404 22:19:57.625006   37825 command_runner.go:130] > #   deprecated option "conmon".
	I0404 22:19:57.625017   37825 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0404 22:19:57.625024   37825 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0404 22:19:57.625030   37825 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0404 22:19:57.625038   37825 command_runner.go:130] > #   should be moved to the container's cgroup
	I0404 22:19:57.625044   37825 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0404 22:19:57.625049   37825 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0404 22:19:57.625057   37825 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0404 22:19:57.625062   37825 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0404 22:19:57.625067   37825 command_runner.go:130] > #
	I0404 22:19:57.625072   37825 command_runner.go:130] > # Using the seccomp notifier feature:
	I0404 22:19:57.625075   37825 command_runner.go:130] > #
	I0404 22:19:57.625080   37825 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0404 22:19:57.625090   37825 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0404 22:19:57.625093   37825 command_runner.go:130] > #
	I0404 22:19:57.625101   37825 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0404 22:19:57.625109   37825 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0404 22:19:57.625112   37825 command_runner.go:130] > #
	I0404 22:19:57.625118   37825 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0404 22:19:57.625124   37825 command_runner.go:130] > # feature.
	I0404 22:19:57.625127   37825 command_runner.go:130] > #
	I0404 22:19:57.625132   37825 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0404 22:19:57.625138   37825 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0404 22:19:57.625146   37825 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0404 22:19:57.625152   37825 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0404 22:19:57.625166   37825 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0404 22:19:57.625172   37825 command_runner.go:130] > #
	I0404 22:19:57.625177   37825 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0404 22:19:57.625191   37825 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0404 22:19:57.625197   37825 command_runner.go:130] > #
	I0404 22:19:57.625203   37825 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0404 22:19:57.625210   37825 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0404 22:19:57.625214   37825 command_runner.go:130] > #
	I0404 22:19:57.625219   37825 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0404 22:19:57.625227   37825 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0404 22:19:57.625230   37825 command_runner.go:130] > # limitation.
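A minimal sketch of a runtime handler that opts into the seccomp notifier described above; the handler name "runc-notifier" is hypothetical, and the paths simply mirror the runc entry that follows:

	[crio.runtime.runtimes.runc-notifier]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]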
	I0404 22:19:57.625234   37825 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0404 22:19:57.625241   37825 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0404 22:19:57.625245   37825 command_runner.go:130] > runtime_type = "oci"
	I0404 22:19:57.625249   37825 command_runner.go:130] > runtime_root = "/run/runc"
	I0404 22:19:57.625253   37825 command_runner.go:130] > runtime_config_path = ""
	I0404 22:19:57.625257   37825 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0404 22:19:57.625261   37825 command_runner.go:130] > monitor_cgroup = "pod"
	I0404 22:19:57.625265   37825 command_runner.go:130] > monitor_exec_cgroup = ""
	I0404 22:19:57.625269   37825 command_runner.go:130] > monitor_env = [
	I0404 22:19:57.625274   37825 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0404 22:19:57.625279   37825 command_runner.go:130] > ]
	I0404 22:19:57.625284   37825 command_runner.go:130] > privileged_without_host_devices = false
	I0404 22:19:57.625293   37825 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0404 22:19:57.625298   37825 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0404 22:19:57.625304   37825 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0404 22:19:57.625311   37825 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0404 22:19:57.625321   37825 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0404 22:19:57.625326   37825 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0404 22:19:57.625342   37825 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0404 22:19:57.625352   37825 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0404 22:19:57.625357   37825 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0404 22:19:57.625367   37825 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0404 22:19:57.625370   37825 command_runner.go:130] > # Example:
	I0404 22:19:57.625375   37825 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0404 22:19:57.625381   37825 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0404 22:19:57.625389   37825 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0404 22:19:57.625402   37825 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0404 22:19:57.625406   37825 command_runner.go:130] > # cpuset = 0
	I0404 22:19:57.625410   37825 command_runner.go:130] > # cpushares = "0-1"
	I0404 22:19:57.625413   37825 command_runner.go:130] > # Where:
	I0404 22:19:57.625417   37825 command_runner.go:130] > # The workload name is workload-type.
	I0404 22:19:57.625427   37825 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0404 22:19:57.625432   37825 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0404 22:19:57.625438   37825 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0404 22:19:57.625447   37825 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0404 22:19:57.625468   37825 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0404 22:19:57.625473   37825 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0404 22:19:57.625479   37825 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0404 22:19:57.625486   37825 command_runner.go:130] > # Default value is set to true
	I0404 22:19:57.625490   37825 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0404 22:19:57.625498   37825 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0404 22:19:57.625503   37825 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0404 22:19:57.625509   37825 command_runner.go:130] > # Default value is set to 'false'
	I0404 22:19:57.625514   37825 command_runner.go:130] > # disable_hostport_mapping = false
	I0404 22:19:57.625523   37825 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0404 22:19:57.625526   37825 command_runner.go:130] > #
	I0404 22:19:57.625532   37825 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0404 22:19:57.625540   37825 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0404 22:19:57.625546   37825 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0404 22:19:57.625552   37825 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0404 22:19:57.625557   37825 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0404 22:19:57.625561   37825 command_runner.go:130] > [crio.image]
	I0404 22:19:57.625566   37825 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0404 22:19:57.625570   37825 command_runner.go:130] > # default_transport = "docker://"
	I0404 22:19:57.625576   37825 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0404 22:19:57.625581   37825 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0404 22:19:57.625585   37825 command_runner.go:130] > # global_auth_file = ""
	I0404 22:19:57.625590   37825 command_runner.go:130] > # The image used to instantiate infra containers.
	I0404 22:19:57.625594   37825 command_runner.go:130] > # This option supports live configuration reload.
	I0404 22:19:57.625599   37825 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0404 22:19:57.625605   37825 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0404 22:19:57.625617   37825 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0404 22:19:57.625624   37825 command_runner.go:130] > # This option supports live configuration reload.
	I0404 22:19:57.625631   37825 command_runner.go:130] > # pause_image_auth_file = ""
	I0404 22:19:57.625641   37825 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0404 22:19:57.625651   37825 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0404 22:19:57.625657   37825 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0404 22:19:57.625665   37825 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0404 22:19:57.625669   37825 command_runner.go:130] > # pause_command = "/pause"
	I0404 22:19:57.625677   37825 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0404 22:19:57.625683   37825 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0404 22:19:57.625691   37825 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0404 22:19:57.625699   37825 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0404 22:19:57.625712   37825 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0404 22:19:57.625725   37825 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0404 22:19:57.625735   37825 command_runner.go:130] > # pinned_images = [
	I0404 22:19:57.625739   37825 command_runner.go:130] > # ]
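As a hypothetical example of the exact and glob patterns described above (not part of this configuration), pinning the pause image plus a wildcard prefix could look like:

	pinned_images = [
		"registry.k8s.io/pause:3.9",
		"registry.k8s.io/kube-*",
	]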
	I0404 22:19:57.625745   37825 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0404 22:19:57.625757   37825 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0404 22:19:57.625766   37825 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0404 22:19:57.625772   37825 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0404 22:19:57.625779   37825 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0404 22:19:57.625783   37825 command_runner.go:130] > # signature_policy = ""
	I0404 22:19:57.625791   37825 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0404 22:19:57.625797   37825 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0404 22:19:57.625805   37825 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0404 22:19:57.625811   37825 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0404 22:19:57.625825   37825 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0404 22:19:57.625835   37825 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0404 22:19:57.625845   37825 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0404 22:19:57.625858   37825 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0404 22:19:57.625868   37825 command_runner.go:130] > # changing them here.
	I0404 22:19:57.625876   37825 command_runner.go:130] > # insecure_registries = [
	I0404 22:19:57.625883   37825 command_runner.go:130] > # ]
	I0404 22:19:57.625891   37825 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0404 22:19:57.625897   37825 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0404 22:19:57.625902   37825 command_runner.go:130] > # image_volumes = "mkdir"
	I0404 22:19:57.625912   37825 command_runner.go:130] > # Temporary directory to use for storing big files
	I0404 22:19:57.625918   37825 command_runner.go:130] > # big_files_temporary_dir = ""
	I0404 22:19:57.625924   37825 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0404 22:19:57.625928   37825 command_runner.go:130] > # CNI plugins.
	I0404 22:19:57.625932   37825 command_runner.go:130] > [crio.network]
	I0404 22:19:57.625937   37825 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0404 22:19:57.625943   37825 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0404 22:19:57.625948   37825 command_runner.go:130] > # cni_default_network = ""
	I0404 22:19:57.625955   37825 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0404 22:19:57.625959   37825 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0404 22:19:57.625967   37825 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0404 22:19:57.625973   37825 command_runner.go:130] > # plugin_dirs = [
	I0404 22:19:57.625979   37825 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0404 22:19:57.625982   37825 command_runner.go:130] > # ]
	I0404 22:19:57.625988   37825 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0404 22:19:57.625993   37825 command_runner.go:130] > [crio.metrics]
	I0404 22:19:57.625998   37825 command_runner.go:130] > # Globally enable or disable metrics support.
	I0404 22:19:57.626002   37825 command_runner.go:130] > enable_metrics = true
	I0404 22:19:57.626007   37825 command_runner.go:130] > # Specify enabled metrics collectors.
	I0404 22:19:57.626012   37825 command_runner.go:130] > # Per default all metrics are enabled.
	I0404 22:19:57.626018   37825 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0404 22:19:57.626024   37825 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0404 22:19:57.626033   37825 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0404 22:19:57.626037   37825 command_runner.go:130] > # metrics_collectors = [
	I0404 22:19:57.626045   37825 command_runner.go:130] > # 	"operations",
	I0404 22:19:57.626049   37825 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0404 22:19:57.626056   37825 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0404 22:19:57.626061   37825 command_runner.go:130] > # 	"operations_errors",
	I0404 22:19:57.626067   37825 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0404 22:19:57.626072   37825 command_runner.go:130] > # 	"image_pulls_by_name",
	I0404 22:19:57.626077   37825 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0404 22:19:57.626081   37825 command_runner.go:130] > # 	"image_pulls_failures",
	I0404 22:19:57.626088   37825 command_runner.go:130] > # 	"image_pulls_successes",
	I0404 22:19:57.626091   37825 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0404 22:19:57.626098   37825 command_runner.go:130] > # 	"image_layer_reuse",
	I0404 22:19:57.626105   37825 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0404 22:19:57.626153   37825 command_runner.go:130] > # 	"containers_oom_total",
	I0404 22:19:57.626163   37825 command_runner.go:130] > # 	"containers_oom",
	I0404 22:19:57.626167   37825 command_runner.go:130] > # 	"processes_defunct",
	I0404 22:19:57.626171   37825 command_runner.go:130] > # 	"operations_total",
	I0404 22:19:57.626176   37825 command_runner.go:130] > # 	"operations_latency_seconds",
	I0404 22:19:57.626182   37825 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0404 22:19:57.626187   37825 command_runner.go:130] > # 	"operations_errors_total",
	I0404 22:19:57.626193   37825 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0404 22:19:57.626198   37825 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0404 22:19:57.626204   37825 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0404 22:19:57.626208   37825 command_runner.go:130] > # 	"image_pulls_success_total",
	I0404 22:19:57.626218   37825 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0404 22:19:57.626222   37825 command_runner.go:130] > # 	"containers_oom_count_total",
	I0404 22:19:57.626226   37825 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0404 22:19:57.626230   37825 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0404 22:19:57.626234   37825 command_runner.go:130] > # ]
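Purely as a sketch (these collectors are commented out in the dump above), enabling only a subset of the listed collectors would look like:

	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
	]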
	I0404 22:19:57.626239   37825 command_runner.go:130] > # The port on which the metrics server will listen.
	I0404 22:19:57.626245   37825 command_runner.go:130] > # metrics_port = 9090
	I0404 22:19:57.626250   37825 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0404 22:19:57.626256   37825 command_runner.go:130] > # metrics_socket = ""
	I0404 22:19:57.626260   37825 command_runner.go:130] > # The certificate for the secure metrics server.
	I0404 22:19:57.626266   37825 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0404 22:19:57.626272   37825 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0404 22:19:57.626279   37825 command_runner.go:130] > # certificate on any modification event.
	I0404 22:19:57.626283   37825 command_runner.go:130] > # metrics_cert = ""
	I0404 22:19:57.626290   37825 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0404 22:19:57.626294   37825 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0404 22:19:57.626298   37825 command_runner.go:130] > # metrics_key = ""
	I0404 22:19:57.626305   37825 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0404 22:19:57.626309   37825 command_runner.go:130] > [crio.tracing]
	I0404 22:19:57.626315   37825 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0404 22:19:57.626322   37825 command_runner.go:130] > # enable_tracing = false
	I0404 22:19:57.626327   37825 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0404 22:19:57.626332   37825 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0404 22:19:57.626338   37825 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0404 22:19:57.626345   37825 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0404 22:19:57.626353   37825 command_runner.go:130] > # CRI-O NRI configuration.
	I0404 22:19:57.626359   37825 command_runner.go:130] > [crio.nri]
	I0404 22:19:57.626363   37825 command_runner.go:130] > # Globally enable or disable NRI.
	I0404 22:19:57.626367   37825 command_runner.go:130] > # enable_nri = false
	I0404 22:19:57.626371   37825 command_runner.go:130] > # NRI socket to listen on.
	I0404 22:19:57.626378   37825 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0404 22:19:57.626382   37825 command_runner.go:130] > # NRI plugin directory to use.
	I0404 22:19:57.626388   37825 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0404 22:19:57.626393   37825 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0404 22:19:57.626400   37825 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0404 22:19:57.626405   37825 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0404 22:19:57.626411   37825 command_runner.go:130] > # nri_disable_connections = false
	I0404 22:19:57.626416   37825 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0404 22:19:57.626423   37825 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0404 22:19:57.626428   37825 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0404 22:19:57.626434   37825 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0404 22:19:57.626440   37825 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0404 22:19:57.626444   37825 command_runner.go:130] > [crio.stats]
	I0404 22:19:57.626450   37825 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0404 22:19:57.626457   37825 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0404 22:19:57.626461   37825 command_runner.go:130] > # stats_collection_period = 0
	I0404 22:19:57.626915   37825 command_runner.go:130] ! time="2024-04-04 22:19:57.587462216Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0404 22:19:57.626941   37825 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0404 22:19:57.627094   37825 cni.go:84] Creating CNI manager for ""
	I0404 22:19:57.627109   37825 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0404 22:19:57.627118   37825 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:19:57.627139   37825 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-575162 NodeName:multinode-575162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:19:57.627275   37825 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-575162"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:19:57.627340   37825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:19:57.639202   37825 command_runner.go:130] > kubeadm
	I0404 22:19:57.639224   37825 command_runner.go:130] > kubectl
	I0404 22:19:57.639231   37825 command_runner.go:130] > kubelet
	I0404 22:19:57.641066   37825 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:19:57.641123   37825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:19:57.653092   37825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0404 22:19:57.673190   37825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:19:57.692809   37825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0404 22:19:57.712326   37825 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I0404 22:19:57.716529   37825 command_runner.go:130] > 192.168.39.203	control-plane.minikube.internal
	I0404 22:19:57.716757   37825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:19:57.872016   37825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:19:57.888992   37825 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162 for IP: 192.168.39.203
	I0404 22:19:57.889017   37825 certs.go:194] generating shared ca certs ...
	I0404 22:19:57.889035   37825 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:19:57.889190   37825 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:19:57.889226   37825 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:19:57.889250   37825 certs.go:256] generating profile certs ...
	I0404 22:19:57.889335   37825 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/client.key
	I0404 22:19:57.889393   37825 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/apiserver.key.777590d0
	I0404 22:19:57.889432   37825 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/proxy-client.key
	I0404 22:19:57.889443   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0404 22:19:57.889454   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0404 22:19:57.889466   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0404 22:19:57.889478   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0404 22:19:57.889488   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0404 22:19:57.889504   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0404 22:19:57.889516   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0404 22:19:57.889528   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0404 22:19:57.889577   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:19:57.889609   37825 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:19:57.889618   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:19:57.889640   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:19:57.889663   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:19:57.889683   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:19:57.889723   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:19:57.889753   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:19:57.889771   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem -> /usr/share/ca-certificates/12554.pem
	I0404 22:19:57.889782   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /usr/share/ca-certificates/125542.pem
	I0404 22:19:57.890334   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:19:57.916230   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:19:57.941191   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:19:57.966631   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:19:57.991737   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0404 22:19:58.017186   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 22:19:58.041683   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:19:58.069009   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:19:58.094726   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:19:58.121695   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:19:58.147997   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:19:58.173393   37825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:19:58.191047   37825 ssh_runner.go:195] Run: openssl version
	I0404 22:19:58.197588   37825 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0404 22:19:58.197683   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:19:58.209539   37825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:19:58.214523   37825 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:19:58.214694   37825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:19:58.214736   37825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:19:58.221213   37825 command_runner.go:130] > b5213941
	I0404 22:19:58.221372   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:19:58.231699   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:19:58.243073   37825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:19:58.247940   37825 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:19:58.247968   37825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:19:58.248003   37825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:19:58.254006   37825 command_runner.go:130] > 51391683
	I0404 22:19:58.254082   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:19:58.263623   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:19:58.274356   37825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:19:58.278951   37825 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:19:58.278991   37825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:19:58.279021   37825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:19:58.284442   37825 command_runner.go:130] > 3ec20f2e
	I0404 22:19:58.284729   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
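The three blocks above repeat the same pattern for each CA: copy it into /usr/share/ca-certificates, compute its OpenSSL subject hash with `openssl x509 -hash -noout`, and symlink it into /etc/ssl/certs under `<hash>.0` (e.g. b5213941.0) so OpenSSL's trust lookup can resolve it. A hedged Go sketch of that hash-and-link step (linkCA is illustrative, not minikube's implementation):

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCA runs the same openssl command the log shows, then exposes the
	// certificate under /etc/ssl/certs/<hash>.0 the way `ln -fs` does.
	func linkCA(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // mirror the -f in `ln -fs`: replace any stale link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}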
	I0404 22:19:58.294274   37825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:19:58.298931   37825 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:19:58.298950   37825 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0404 22:19:58.298963   37825 command_runner.go:130] > Device: 253,1	Inode: 8386566     Links: 1
	I0404 22:19:58.298970   37825 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0404 22:19:58.298976   37825 command_runner.go:130] > Access: 2024-04-04 22:13:26.340612781 +0000
	I0404 22:19:58.298985   37825 command_runner.go:130] > Modify: 2024-04-04 22:13:26.340612781 +0000
	I0404 22:19:58.298991   37825 command_runner.go:130] > Change: 2024-04-04 22:13:26.340612781 +0000
	I0404 22:19:58.298999   37825 command_runner.go:130] >  Birth: 2024-04-04 22:13:26.340612781 +0000
	I0404 22:19:58.299046   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:19:58.304642   37825 command_runner.go:130] > Certificate will not expire
	I0404 22:19:58.304968   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:19:58.310605   37825 command_runner.go:130] > Certificate will not expire
	I0404 22:19:58.310900   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:19:58.317327   37825 command_runner.go:130] > Certificate will not expire
	I0404 22:19:58.317400   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:19:58.323074   37825 command_runner.go:130] > Certificate will not expire
	I0404 22:19:58.323288   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:19:58.329536   37825 command_runner.go:130] > Certificate will not expire
	I0404 22:19:58.329617   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:19:58.335847   37825 command_runner.go:130] > Certificate will not expire
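Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means it does not, so no regeneration is needed. The same check sketched with Go's standard crypto/x509 instead of shelling out to openssl (paths are illustrative only):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM certificate at path expires
	// inside the given window, mirroring `openssl x509 -checkend`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}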
	I0404 22:19:58.335923   37825 kubeadm.go:391] StartCluster: {Name:multinode-575162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-575162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:19:58.336034   37825 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:19:58.336077   37825 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:19:58.379782   37825 command_runner.go:130] > 2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9
	I0404 22:19:58.379813   37825 command_runner.go:130] > 83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06
	I0404 22:19:58.379821   37825 command_runner.go:130] > b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5
	I0404 22:19:58.379831   37825 command_runner.go:130] > ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f
	I0404 22:19:58.379840   37825 command_runner.go:130] > 1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b
	I0404 22:19:58.379854   37825 command_runner.go:130] > 54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c
	I0404 22:19:58.379863   37825 command_runner.go:130] > 37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e
	I0404 22:19:58.379877   37825 command_runner.go:130] > cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65
	I0404 22:19:58.379907   37825 cri.go:89] found id: "2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9"
	I0404 22:19:58.379918   37825 cri.go:89] found id: "83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06"
	I0404 22:19:58.379923   37825 cri.go:89] found id: "b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5"
	I0404 22:19:58.379928   37825 cri.go:89] found id: "ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f"
	I0404 22:19:58.379935   37825 cri.go:89] found id: "1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b"
	I0404 22:19:58.379939   37825 cri.go:89] found id: "54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c"
	I0404 22:19:58.379943   37825 cri.go:89] found id: "37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e"
	I0404 22:19:58.379950   37825 cri.go:89] found id: "cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65"
	I0404 22:19:58.379954   37825 cri.go:89] found id: ""
	I0404 22:19:58.380011   37825 ssh_runner.go:195] Run: sudo runc list -f json
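The "found id" lines above come from the crictl query a few lines earlier: every kube-system container ID, one per line, collected before StartCluster decides what to restart. A short sketch of that listing step (assumes crictl on PATH and root access; kubeSystemContainerIDs is a hypothetical helper, not minikube's cri.go):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// kubeSystemContainerIDs runs the same crictl invocation the log shows
	// and splits its newline-separated output into container IDs.
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}
	
	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}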
	
	
	==> CRI-O <==
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.398290105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269287398266710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc792166-18fa-4126-9bd0-8ad447287911 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.398991326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca5fd0db-e440-4b7d-bee0-757181f2dcd4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.399072411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca5fd0db-e440-4b7d-bee0-757181f2dcd4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.399529883Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2daad0a4e008be6bcdf8ad3ad27913e33a0b3e3abeca7e69c0dfac4c0f826aeb,PodSandboxId:3c705ed4993268d84b25211407eaef83d7246fa6b828b661bcde74ea10b9ff84,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712269238311575143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85cfd3d51d04791e19edca989e7df5f16c2eda3fc78a01862a80a629f9e55b7,PodSandboxId:95ef7610dcf7306d2c0dc723c5fadc5c57168d9be7413ebb22e8fc997f67386e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712269204684709447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f95e6e6792b0fc7710eeccb92e5b03a642410a229d740db2f27fc4bab45fe00d,PodSandboxId:bb9ba1144e3a2089980b5818d30d6279a6c9bca00616833aa21544980f4bb00b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712269204642023280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102e4a1df4286176ee1ed0031104fe277e57f243ef5e3d1787bbcd09c9597adc,PodSandboxId:d3d5bc97297e0205e50e3eab9a5c65e665c2c0174d8c274771452fb39e7aad8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712269204576495111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-880c-933bbcf4179c,},Annotations:map[string
]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb80f60bf1cdf5d549638bfcbadb46678e82604ed36afb2f3fa93be211332074,PodSandboxId:a23f7dc5835a1666c8779faee115274315856a57602f3d1589daea917d9a0b07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269204524502229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.k
ubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659d28fd4ccb2f659ff96b4bc13cc4d0623d7316ede252f0a086b84a03f26d7c,PodSandboxId:f849c1f28f09efb03fc4c2c267c8933bba3ca2479c54a7fcf64ccc193d851b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712269200708596918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a4380c88d06ba,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ec7f749f0106297ded5c43450bc3a6971f583d1352615bea10be145811b183,PodSandboxId:c9c1209088bb4c336a1213b6f72e64ed6ce45744d5b0ce4c21d495844e4bfdbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712269200718618186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.ku
bernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672566204aa04e9c461881544e6c292dd956340e271cd8751bf69a2d5b32d49d,PodSandboxId:6d32b8a2d36b077b5e0871f07fece56861ebb50586b44a24fe3bbc3586ead6fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712269200741187661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{io.kubernetes.container.hash: cf29fcb2,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4d0929884ad44e185c7b8183b01e25e1552dfba18d1eb93636aa6a545fa7b3,PodSandboxId:422d92b789114d4126ddae2eb2431a27a2c4565faaf80b004167863ea70489f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712269200689785391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192456d1920b3744637a5d8edf816962c5352d2f7f1757176c8bfecd5bc5480e,PodSandboxId:c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712268888111609975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9,PodSandboxId:c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712268833141091942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.kubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06,PodSandboxId:8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712268832100832940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5,PodSandboxId:b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712268830713173484,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f,PodSandboxId:e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712268830568151377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-88
0c-933bbcf4179c,},Annotations:map[string]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c,PodSandboxId:4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712268810551019927,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b,PodSandboxId:3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712268810561936164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a
4380c88d06ba,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e,PodSandboxId:beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712268810440750222,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{
io.kubernetes.container.hash: cf29fcb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65,PodSandboxId:ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712268810438584839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca5fd0db-e440-4b7d-bee0-757181f2dcd4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.449606506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8c7b1cb-79d8-4064-bad1-6a54cf0e97be name=/runtime.v1.RuntimeService/Version
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.449711941Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8c7b1cb-79d8-4064-bad1-6a54cf0e97be name=/runtime.v1.RuntimeService/Version
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.451480710Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e95316e-e125-4ac5-b9c3-8f902fceb188 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.451924565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269287451898881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e95316e-e125-4ac5-b9c3-8f902fceb188 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.452600455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a8e4bd1-4b9e-4f07-abd6-6e3e170bdefd name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.452679814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a8e4bd1-4b9e-4f07-abd6-6e3e170bdefd name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.453124810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2daad0a4e008be6bcdf8ad3ad27913e33a0b3e3abeca7e69c0dfac4c0f826aeb,PodSandboxId:3c705ed4993268d84b25211407eaef83d7246fa6b828b661bcde74ea10b9ff84,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712269238311575143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85cfd3d51d04791e19edca989e7df5f16c2eda3fc78a01862a80a629f9e55b7,PodSandboxId:95ef7610dcf7306d2c0dc723c5fadc5c57168d9be7413ebb22e8fc997f67386e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712269204684709447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f95e6e6792b0fc7710eeccb92e5b03a642410a229d740db2f27fc4bab45fe00d,PodSandboxId:bb9ba1144e3a2089980b5818d30d6279a6c9bca00616833aa21544980f4bb00b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712269204642023280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102e4a1df4286176ee1ed0031104fe277e57f243ef5e3d1787bbcd09c9597adc,PodSandboxId:d3d5bc97297e0205e50e3eab9a5c65e665c2c0174d8c274771452fb39e7aad8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712269204576495111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-880c-933bbcf4179c,},Annotations:map[string
]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb80f60bf1cdf5d549638bfcbadb46678e82604ed36afb2f3fa93be211332074,PodSandboxId:a23f7dc5835a1666c8779faee115274315856a57602f3d1589daea917d9a0b07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269204524502229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.k
ubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659d28fd4ccb2f659ff96b4bc13cc4d0623d7316ede252f0a086b84a03f26d7c,PodSandboxId:f849c1f28f09efb03fc4c2c267c8933bba3ca2479c54a7fcf64ccc193d851b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712269200708596918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a4380c88d06ba,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ec7f749f0106297ded5c43450bc3a6971f583d1352615bea10be145811b183,PodSandboxId:c9c1209088bb4c336a1213b6f72e64ed6ce45744d5b0ce4c21d495844e4bfdbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712269200718618186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.ku
bernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672566204aa04e9c461881544e6c292dd956340e271cd8751bf69a2d5b32d49d,PodSandboxId:6d32b8a2d36b077b5e0871f07fece56861ebb50586b44a24fe3bbc3586ead6fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712269200741187661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{io.kubernetes.container.hash: cf29fcb2,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4d0929884ad44e185c7b8183b01e25e1552dfba18d1eb93636aa6a545fa7b3,PodSandboxId:422d92b789114d4126ddae2eb2431a27a2c4565faaf80b004167863ea70489f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712269200689785391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192456d1920b3744637a5d8edf816962c5352d2f7f1757176c8bfecd5bc5480e,PodSandboxId:c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712268888111609975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9,PodSandboxId:c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712268833141091942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.kubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06,PodSandboxId:8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712268832100832940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5,PodSandboxId:b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712268830713173484,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f,PodSandboxId:e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712268830568151377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-88
0c-933bbcf4179c,},Annotations:map[string]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c,PodSandboxId:4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712268810551019927,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b,PodSandboxId:3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712268810561936164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a
4380c88d06ba,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e,PodSandboxId:beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712268810440750222,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{
io.kubernetes.container.hash: cf29fcb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65,PodSandboxId:ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712268810438584839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a8e4bd1-4b9e-4f07-abd6-6e3e170bdefd name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.500094906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e8294b6-7469-4407-b748-ac4030ebc738 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.500194331Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e8294b6-7469-4407-b748-ac4030ebc738 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.501699292Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f39f87d-8d85-4f38-a0d5-9154e2d806d7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.502106437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269287502084530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f39f87d-8d85-4f38-a0d5-9154e2d806d7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.502887468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d2838ac-8aed-40bb-b339-85e4cd25f3b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.502940170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d2838ac-8aed-40bb-b339-85e4cd25f3b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.503396517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2daad0a4e008be6bcdf8ad3ad27913e33a0b3e3abeca7e69c0dfac4c0f826aeb,PodSandboxId:3c705ed4993268d84b25211407eaef83d7246fa6b828b661bcde74ea10b9ff84,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712269238311575143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85cfd3d51d04791e19edca989e7df5f16c2eda3fc78a01862a80a629f9e55b7,PodSandboxId:95ef7610dcf7306d2c0dc723c5fadc5c57168d9be7413ebb22e8fc997f67386e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712269204684709447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f95e6e6792b0fc7710eeccb92e5b03a642410a229d740db2f27fc4bab45fe00d,PodSandboxId:bb9ba1144e3a2089980b5818d30d6279a6c9bca00616833aa21544980f4bb00b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712269204642023280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102e4a1df4286176ee1ed0031104fe277e57f243ef5e3d1787bbcd09c9597adc,PodSandboxId:d3d5bc97297e0205e50e3eab9a5c65e665c2c0174d8c274771452fb39e7aad8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712269204576495111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-880c-933bbcf4179c,},Annotations:map[string
]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb80f60bf1cdf5d549638bfcbadb46678e82604ed36afb2f3fa93be211332074,PodSandboxId:a23f7dc5835a1666c8779faee115274315856a57602f3d1589daea917d9a0b07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269204524502229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.k
ubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659d28fd4ccb2f659ff96b4bc13cc4d0623d7316ede252f0a086b84a03f26d7c,PodSandboxId:f849c1f28f09efb03fc4c2c267c8933bba3ca2479c54a7fcf64ccc193d851b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712269200708596918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a4380c88d06ba,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ec7f749f0106297ded5c43450bc3a6971f583d1352615bea10be145811b183,PodSandboxId:c9c1209088bb4c336a1213b6f72e64ed6ce45744d5b0ce4c21d495844e4bfdbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712269200718618186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.ku
bernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672566204aa04e9c461881544e6c292dd956340e271cd8751bf69a2d5b32d49d,PodSandboxId:6d32b8a2d36b077b5e0871f07fece56861ebb50586b44a24fe3bbc3586ead6fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712269200741187661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{io.kubernetes.container.hash: cf29fcb2,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4d0929884ad44e185c7b8183b01e25e1552dfba18d1eb93636aa6a545fa7b3,PodSandboxId:422d92b789114d4126ddae2eb2431a27a2c4565faaf80b004167863ea70489f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712269200689785391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192456d1920b3744637a5d8edf816962c5352d2f7f1757176c8bfecd5bc5480e,PodSandboxId:c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712268888111609975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9,PodSandboxId:c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712268833141091942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.kubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06,PodSandboxId:8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712268832100832940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5,PodSandboxId:b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712268830713173484,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f,PodSandboxId:e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712268830568151377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-88
0c-933bbcf4179c,},Annotations:map[string]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c,PodSandboxId:4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712268810551019927,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b,PodSandboxId:3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712268810561936164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a
4380c88d06ba,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e,PodSandboxId:beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712268810440750222,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{
io.kubernetes.container.hash: cf29fcb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65,PodSandboxId:ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712268810438584839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d2838ac-8aed-40bb-b339-85e4cd25f3b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.547632550Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83a80d12-40cc-4e33-a1e3-63dd6b772582 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.547743144Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83a80d12-40cc-4e33-a1e3-63dd6b772582 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.549073546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=401c3dd9-f7c6-428a-a0c3-a71783dafbeb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.549571016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269287549547514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=401c3dd9-f7c6-428a-a0c3-a71783dafbeb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.550372872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2def120c-515c-4a59-b363-ce8b1ed4fdc6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.550433228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2def120c-515c-4a59-b363-ce8b1ed4fdc6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:21:27 multinode-575162 crio[2846]: time="2024-04-04 22:21:27.550795656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2daad0a4e008be6bcdf8ad3ad27913e33a0b3e3abeca7e69c0dfac4c0f826aeb,PodSandboxId:3c705ed4993268d84b25211407eaef83d7246fa6b828b661bcde74ea10b9ff84,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712269238311575143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85cfd3d51d04791e19edca989e7df5f16c2eda3fc78a01862a80a629f9e55b7,PodSandboxId:95ef7610dcf7306d2c0dc723c5fadc5c57168d9be7413ebb22e8fc997f67386e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712269204684709447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f95e6e6792b0fc7710eeccb92e5b03a642410a229d740db2f27fc4bab45fe00d,PodSandboxId:bb9ba1144e3a2089980b5818d30d6279a6c9bca00616833aa21544980f4bb00b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712269204642023280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102e4a1df4286176ee1ed0031104fe277e57f243ef5e3d1787bbcd09c9597adc,PodSandboxId:d3d5bc97297e0205e50e3eab9a5c65e665c2c0174d8c274771452fb39e7aad8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712269204576495111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-880c-933bbcf4179c,},Annotations:map[string
]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb80f60bf1cdf5d549638bfcbadb46678e82604ed36afb2f3fa93be211332074,PodSandboxId:a23f7dc5835a1666c8779faee115274315856a57602f3d1589daea917d9a0b07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269204524502229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.k
ubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659d28fd4ccb2f659ff96b4bc13cc4d0623d7316ede252f0a086b84a03f26d7c,PodSandboxId:f849c1f28f09efb03fc4c2c267c8933bba3ca2479c54a7fcf64ccc193d851b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712269200708596918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a4380c88d06ba,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ec7f749f0106297ded5c43450bc3a6971f583d1352615bea10be145811b183,PodSandboxId:c9c1209088bb4c336a1213b6f72e64ed6ce45744d5b0ce4c21d495844e4bfdbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712269200718618186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.ku
bernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672566204aa04e9c461881544e6c292dd956340e271cd8751bf69a2d5b32d49d,PodSandboxId:6d32b8a2d36b077b5e0871f07fece56861ebb50586b44a24fe3bbc3586ead6fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712269200741187661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{io.kubernetes.container.hash: cf29fcb2,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4d0929884ad44e185c7b8183b01e25e1552dfba18d1eb93636aa6a545fa7b3,PodSandboxId:422d92b789114d4126ddae2eb2431a27a2c4565faaf80b004167863ea70489f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712269200689785391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192456d1920b3744637a5d8edf816962c5352d2f7f1757176c8bfecd5bc5480e,PodSandboxId:c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712268888111609975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9,PodSandboxId:c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712268833141091942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.kubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06,PodSandboxId:8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712268832100832940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5,PodSandboxId:b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712268830713173484,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f,PodSandboxId:e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712268830568151377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-88
0c-933bbcf4179c,},Annotations:map[string]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c,PodSandboxId:4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712268810551019927,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b,PodSandboxId:3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712268810561936164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a
4380c88d06ba,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e,PodSandboxId:beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712268810440750222,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{
io.kubernetes.container.hash: cf29fcb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65,PodSandboxId:ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712268810438584839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2def120c-515c-4a59-b363-ce8b1ed4fdc6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2daad0a4e008b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      49 seconds ago       Running             busybox                   1                   3c705ed499326       busybox-7fdf7869d9-dlm6j
	d85cfd3d51d04       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   95ef7610dcf73       kindnet-l9sdd
	f95e6e6792b0f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   bb9ba1144e3a2       coredns-76f75df574-r5flx
	102e4a1df4286       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      About a minute ago   Running             kube-proxy                1                   d3d5bc97297e0       kube-proxy-p4qc2
	fb80f60bf1cdf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   a23f7dc5835a1       storage-provisioner
	672566204aa04       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   6d32b8a2d36b0       etcd-multinode-575162
	c6ec7f749f010       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            1                   c9c1209088bb4       kube-apiserver-multinode-575162
	659d28fd4ccb2       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   1                   f849c1f28f09e       kube-controller-manager-multinode-575162
	fd4d0929884ad       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      About a minute ago   Running             kube-scheduler            1                   422d92b789114       kube-scheduler-multinode-575162
	192456d1920b3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   c7e895b75a604       busybox-7fdf7869d9-dlm6j
	2dce4432373d8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   c28d0930476da       storage-provisioner
	83e49da2db9e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   8aaf4ecaf186e       coredns-76f75df574-r5flx
	b6effb9553a51       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   b8e602765fb09       kindnet-l9sdd
	ffdc3c748508c       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago        Exited              kube-proxy                0                   e83602b28499f       kube-proxy-p4qc2
	1c8f7d8794514       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago        Exited              kube-controller-manager   0                   3792f7ece0d72       kube-controller-manager-multinode-575162
	54ccdf173a397       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago        Exited              kube-scheduler            0                   4018c8dd6629e       kube-scheduler-multinode-575162
	37301234a6dc1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   beb7585b145cd       etcd-multinode-575162
	cdff1c4750bae       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago        Exited              kube-apiserver            0                   ceacf97c23d9f       kube-apiserver-multinode-575162
	
	
	==> coredns [83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06] <==
	[INFO] 10.244.1.2:54098 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001968283s
	[INFO] 10.244.1.2:38677 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126522s
	[INFO] 10.244.1.2:45073 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099451s
	[INFO] 10.244.1.2:38610 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001466786s
	[INFO] 10.244.1.2:36266 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007049s
	[INFO] 10.244.1.2:46397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098746s
	[INFO] 10.244.1.2:33139 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080132s
	[INFO] 10.244.0.3:38244 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009703s
	[INFO] 10.244.0.3:54175 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077883s
	[INFO] 10.244.0.3:33752 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117215s
	[INFO] 10.244.0.3:52462 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000032671s
	[INFO] 10.244.1.2:42574 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162733s
	[INFO] 10.244.1.2:48042 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113087s
	[INFO] 10.244.1.2:58404 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067235s
	[INFO] 10.244.1.2:55519 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110653s
	[INFO] 10.244.0.3:34341 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081425s
	[INFO] 10.244.0.3:50706 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139737s
	[INFO] 10.244.0.3:34366 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065691s
	[INFO] 10.244.0.3:48500 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150261s
	[INFO] 10.244.1.2:34154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155481s
	[INFO] 10.244.1.2:44155 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129107s
	[INFO] 10.244.1.2:37095 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131297s
	[INFO] 10.244.1.2:49878 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115946s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f95e6e6792b0fc7710eeccb92e5b03a642410a229d740db2f27fc4bab45fe00d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35002 - 53422 "HINFO IN 994327420128257834.6641705586587126964. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.018400575s
	
	
	==> describe nodes <==
	Name:               multinode-575162
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-575162
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=multinode-575162
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T22_13_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 22:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-575162
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 22:21:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 22:20:03 +0000   Thu, 04 Apr 2024 22:13:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 22:20:03 +0000   Thu, 04 Apr 2024 22:13:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 22:20:03 +0000   Thu, 04 Apr 2024 22:13:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 22:20:03 +0000   Thu, 04 Apr 2024 22:13:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    multinode-575162
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa4e7d2217784c3bb6e858eb20908b44
	  System UUID:                aa4e7d22-1778-4c3b-b6e8-58eb20908b44
	  Boot ID:                    b1c84359-b966-4d9c-94e3-8e33fb243db7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-dlm6j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 coredns-76f75df574-r5flx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m38s
	  kube-system                 etcd-multinode-575162                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m51s
	  kube-system                 kindnet-l9sdd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m38s
	  kube-system                 kube-apiserver-multinode-575162             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-controller-manager-multinode-575162    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-proxy-p4qc2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-scheduler-multinode-575162             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m36s                  kube-proxy       
	  Normal  Starting                 82s                    kube-proxy       
	  Normal  Starting                 7m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m58s (x8 over 7m58s)  kubelet          Node multinode-575162 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m58s (x8 over 7m58s)  kubelet          Node multinode-575162 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m58s (x7 over 7m58s)  kubelet          Node multinode-575162 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m51s                  kubelet          Node multinode-575162 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m51s                  kubelet          Node multinode-575162 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m51s                  kubelet          Node multinode-575162 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m51s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m38s                  node-controller  Node multinode-575162 event: Registered Node multinode-575162 in Controller
	  Normal  NodeReady                7m36s                  kubelet          Node multinode-575162 status is now: NodeReady
	  Normal  Starting                 88s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)      kubelet          Node multinode-575162 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)      kubelet          Node multinode-575162 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x7 over 87s)      kubelet          Node multinode-575162 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           71s                    node-controller  Node multinode-575162 event: Registered Node multinode-575162 in Controller
	
	
	Name:               multinode-575162-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-575162-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=multinode-575162
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T22_20_46_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 22:20:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-575162-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 22:21:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 22:21:15 +0000   Thu, 04 Apr 2024 22:20:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 22:21:15 +0000   Thu, 04 Apr 2024 22:20:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 22:21:15 +0000   Thu, 04 Apr 2024 22:20:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 22:21:15 +0000   Thu, 04 Apr 2024 22:20:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    multinode-575162-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa834ddf9dd9403b9314231d9a54ae9e
	  System UUID:                fa834ddf-9dd9-403b-9314-231d9a54ae9e
	  Boot ID:                    612ce0dd-6cab-46af-9ef6-e57ba44eca15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ldcpv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kindnet-z2j24               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m56s
	  kube-system                 kube-proxy-ggctb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m51s                  kube-proxy  
	  Normal  Starting                 37s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m56s (x2 over 6m56s)  kubelet     Node multinode-575162-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m56s (x2 over 6m56s)  kubelet     Node multinode-575162-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m56s (x2 over 6m56s)  kubelet     Node multinode-575162-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m46s                  kubelet     Node multinode-575162-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  42s (x2 over 42s)      kubelet     Node multinode-575162-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x2 over 42s)      kubelet     Node multinode-575162-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x2 over 42s)      kubelet     Node multinode-575162-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                33s                    kubelet     Node multinode-575162-m02 status is now: NodeReady
	
	
	Name:               multinode-575162-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-575162-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=multinode-575162
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T22_21_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 22:21:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-575162-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 22:21:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 22:21:24 +0000   Thu, 04 Apr 2024 22:21:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 22:21:24 +0000   Thu, 04 Apr 2024 22:21:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 22:21:24 +0000   Thu, 04 Apr 2024 22:21:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 22:21:24 +0000   Thu, 04 Apr 2024 22:21:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    multinode-575162-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c53a30ed97147c7ab12973a87c45813
	  System UUID:                4c53a30e-d971-47c7-ab12-973a87c45813
	  Boot ID:                    e353c910-4814-46ee-90a5-46d83e73104a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-tmn7c       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m7s
	  kube-system                 kube-proxy-pcc2s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m2s                   kube-proxy  
	  Normal  Starting                 7s                     kube-proxy  
	  Normal  Starting                 5m21s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m7s (x2 over 6m7s)    kubelet     Node multinode-575162-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m7s (x2 over 6m7s)    kubelet     Node multinode-575162-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m7s (x2 over 6m7s)    kubelet     Node multinode-575162-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m7s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m57s                  kubelet     Node multinode-575162-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m26s (x2 over 5m26s)  kubelet     Node multinode-575162-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m26s (x2 over 5m26s)  kubelet     Node multinode-575162-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m26s (x2 over 5m26s)  kubelet     Node multinode-575162-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m16s                  kubelet     Node multinode-575162-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)      kubelet     Node multinode-575162-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)      kubelet     Node multinode-575162-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)      kubelet     Node multinode-575162-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-575162-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.122243] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.163521] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.133869] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.292735] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.550089] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.059177] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.396745] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.759266] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.547724] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +0.086571] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.585037] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +0.107006] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 4 22:14] kauditd_printk_skb: 82 callbacks suppressed
	[Apr 4 22:19] systemd-fstab-generator[2765]: Ignoring "noauto" option for root device
	[  +0.169541] systemd-fstab-generator[2777]: Ignoring "noauto" option for root device
	[  +0.188499] systemd-fstab-generator[2791]: Ignoring "noauto" option for root device
	[  +0.160724] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.310102] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +8.620326] systemd-fstab-generator[2929]: Ignoring "noauto" option for root device
	[  +0.080862] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.888984] systemd-fstab-generator[3053]: Ignoring "noauto" option for root device
	[Apr 4 22:20] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.532292] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.922388] systemd-fstab-generator[3866]: Ignoring "noauto" option for root device
	[ +18.368961] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e] <==
	{"level":"info","ts":"2024-04-04T22:13:30.828855Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-04T22:13:30.82898Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-04T22:13:30.829296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:13:30.831743Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.203:2379"}
	{"level":"info","ts":"2024-04-04T22:13:30.833442Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:13:30.845274Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:13:30.836011Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-04T22:13:30.859815Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-04-04T22:14:31.38322Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.77246ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3888170719143125443 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-575162-m02.17c333775d3ac571\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-575162-m02.17c333775d3ac571\" value_size:642 lease:3888170719143124392 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-04T22:14:31.383867Z","caller":"traceutil/trace.go:171","msg":"trace[966313568] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"259.646999ms","start":"2024-04-04T22:14:31.124183Z","end":"2024-04-04T22:14:31.38383Z","steps":["trace[966313568] 'process raft request'  (duration: 96.725395ms)","trace[966313568] 'compare'  (duration: 161.528254ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-04T22:14:31.383929Z","caller":"traceutil/trace.go:171","msg":"trace[355165613] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"194.061141ms","start":"2024-04-04T22:14:31.189773Z","end":"2024-04-04T22:14:31.383834Z","steps":["trace[355165613] 'process raft request'  (duration: 193.801312ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:15:20.723538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.636952ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3888170719143125867 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-575162-m03.17c33382db734c7d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-575162-m03.17c33382db734c7d\" value_size:646 lease:3888170719143125508 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-04T22:15:20.724167Z","caller":"traceutil/trace.go:171","msg":"trace[2066032871] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"249.905095ms","start":"2024-04-04T22:15:20.474239Z","end":"2024-04-04T22:15:20.724144Z","steps":["trace[2066032871] 'process raft request'  (duration: 87.528968ms)","trace[2066032871] 'compare'  (duration: 161.343296ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-04T22:15:20.724479Z","caller":"traceutil/trace.go:171","msg":"trace[440202101] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"163.439918ms","start":"2024-04-04T22:15:20.561029Z","end":"2024-04-04T22:15:20.724469Z","steps":["trace[440202101] 'process raft request'  (duration: 162.874447ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T22:15:28.522781Z","caller":"traceutil/trace.go:171","msg":"trace[1892569076] transaction","detail":"{read_only:false; response_revision:665; number_of_response:1; }","duration":"107.569828ms","start":"2024-04-04T22:15:28.415188Z","end":"2024-04-04T22:15:28.522758Z","steps":["trace[1892569076] 'process raft request'  (duration: 107.445858ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T22:18:17.039758Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-04T22:18:17.039887Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-575162","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	{"level":"warn","ts":"2024-04-04T22:18:17.039992Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-04T22:18:17.040084Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-04T22:18:17.076788Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-04T22:18:17.076866Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-04T22:18:17.078287Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"28dd8e6bbca035f5","current-leader-member-id":"28dd8e6bbca035f5"}
	{"level":"info","ts":"2024-04-04T22:18:17.085784Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-04-04T22:18:17.085936Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-04-04T22:18:17.085975Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-575162","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	
	
	==> etcd [672566204aa04e9c461881544e6c292dd956340e271cd8751bf69a2d5b32d49d] <==
	{"level":"info","ts":"2024-04-04T22:20:01.215741Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-04T22:20:01.215751Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-04T22:20:01.218605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 switched to configuration voters=(2944666324747433461)"}
	{"level":"info","ts":"2024-04-04T22:20:01.218807Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","added-peer-id":"28dd8e6bbca035f5","added-peer-peer-urls":["https://192.168.39.203:2380"]}
	{"level":"info","ts":"2024-04-04T22:20:01.218971Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:20:01.219023Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:20:01.239969Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-04T22:20:01.240236Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"28dd8e6bbca035f5","initial-advertise-peer-urls":["https://192.168.39.203:2380"],"listen-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-04T22:20:01.243855Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-04-04T22:20:01.244098Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-04-04T22:20:01.244117Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-04T22:20:02.258812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-04T22:20:02.258919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-04T22:20:02.259025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgPreVoteResp from 28dd8e6bbca035f5 at term 2"}
	{"level":"info","ts":"2024-04-04T22:20:02.259059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became candidate at term 3"}
	{"level":"info","ts":"2024-04-04T22:20:02.259083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgVoteResp from 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-04-04T22:20:02.25911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became leader at term 3"}
	{"level":"info","ts":"2024-04-04T22:20:02.259142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28dd8e6bbca035f5 elected leader 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-04-04T22:20:02.265808Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"28dd8e6bbca035f5","local-member-attributes":"{Name:multinode-575162 ClientURLs:[https://192.168.39.203:2379]}","request-path":"/0/members/28dd8e6bbca035f5/attributes","cluster-id":"3b4a61fb6ca7242f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-04T22:20:02.265906Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:20:02.267993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-04T22:20:02.279136Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:20:02.280409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-04T22:20:02.28047Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-04T22:20:02.282222Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.203:2379"}
	
	
	==> kernel <==
	 22:21:28 up 8 min,  0 users,  load average: 0.21, 0.19, 0.10
	Linux multinode-575162 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5] <==
	I0404 22:17:31.748443       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	I0404 22:17:41.753702       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:17:41.753740       1 main.go:227] handling current node
	I0404 22:17:41.753751       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:17:41.753757       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:17:41.753877       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:17:41.753906       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	I0404 22:17:51.767161       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:17:51.767209       1 main.go:227] handling current node
	I0404 22:17:51.767254       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:17:51.767264       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:17:51.767452       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:17:51.767484       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	I0404 22:18:01.780736       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:18:01.780786       1 main.go:227] handling current node
	I0404 22:18:01.780797       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:18:01.780802       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:18:01.780932       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:18:01.780960       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	I0404 22:18:11.795253       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:18:11.795413       1 main.go:227] handling current node
	I0404 22:18:11.795440       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:18:11.795460       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:18:11.795599       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:18:11.795619       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d85cfd3d51d04791e19edca989e7df5f16c2eda3fc78a01862a80a629f9e55b7] <==
	I0404 22:20:45.652992       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	I0404 22:20:55.666423       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:20:55.666518       1 main.go:227] handling current node
	I0404 22:20:55.666541       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:20:55.666558       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:20:55.666680       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:20:55.666700       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	I0404 22:21:05.671762       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:21:05.671880       1 main.go:227] handling current node
	I0404 22:21:05.671905       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:21:05.671924       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:21:05.672045       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:21:05.672065       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	I0404 22:21:15.681130       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:21:15.681184       1 main.go:227] handling current node
	I0404 22:21:15.681196       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:21:15.681202       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:21:15.681415       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:21:15.681445       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.2.0/24] 
	I0404 22:21:25.696770       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:21:25.696913       1 main.go:227] handling current node
	I0404 22:21:25.696947       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:21:25.696974       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:21:25.697156       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:21:25.697469       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [c6ec7f749f0106297ded5c43450bc3a6971f583d1352615bea10be145811b183] <==
	I0404 22:20:03.637622       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0404 22:20:03.637636       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0404 22:20:03.637647       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0404 22:20:03.754570       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0404 22:20:03.754839       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0404 22:20:03.756586       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0404 22:20:03.778964       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0404 22:20:03.800686       1 aggregator.go:165] initial CRD sync complete...
	I0404 22:20:03.800742       1 autoregister_controller.go:141] Starting autoregister controller
	I0404 22:20:03.800749       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0404 22:20:03.800755       1 cache.go:39] Caches are synced for autoregister controller
	I0404 22:20:03.804612       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0404 22:20:03.805669       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0404 22:20:03.820265       1 shared_informer.go:318] Caches are synced for configmaps
	I0404 22:20:03.820882       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0404 22:20:03.820933       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0404 22:20:03.875393       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0404 22:20:04.625276       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0404 22:20:05.655038       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0404 22:20:05.821010       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0404 22:20:05.840438       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0404 22:20:05.939673       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0404 22:20:05.953191       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0404 22:20:16.629902       1 controller.go:624] quota admission added evaluator for: endpoints
	I0404 22:20:16.930927       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65] <==
	W0404 22:18:17.063944       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.063980       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064009       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064037       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064067       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064096       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064130       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064161       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067059       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067098       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067124       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067149       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067173       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067208       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067235       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067733       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067770       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067797       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067828       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067854       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067891       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067954       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.068089       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.068100       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.068123       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b] <==
	I0404 22:14:48.941693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="5.92921ms"
	I0404 22:14:48.941885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="34.92µs"
	I0404 22:15:20.727127       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:15:20.727613       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-575162-m03\" does not exist"
	I0404 22:15:20.748416       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-575162-m03" podCIDRs=["10.244.2.0/24"]
	I0404 22:15:20.772761       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pcc2s"
	I0404 22:15:20.772834       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tmn7c"
	I0404 22:15:24.712618       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-575162-m03"
	I0404 22:15:24.713023       1 event.go:376] "Event occurred" object="multinode-575162-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-575162-m03 event: Registered Node multinode-575162-m03 in Controller"
	I0404 22:15:30.352952       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:16:00.387032       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:16:01.537442       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:16:01.538787       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-575162-m03\" does not exist"
	I0404 22:16:01.553638       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-575162-m03" podCIDRs=["10.244.3.0/24"]
	I0404 22:16:11.150131       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:16:54.764681       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:16:54.765044       1 event.go:376] "Event occurred" object="multinode-575162-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-575162-m03 status is now: NodeNotReady"
	I0404 22:16:54.774496       1 event.go:376] "Event occurred" object="multinode-575162-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-575162-m02 status is now: NodeNotReady"
	I0404 22:16:54.782691       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-pcc2s" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:16:54.793188       1 event.go:376] "Event occurred" object="kube-system/kindnet-z2j24" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:16:54.803577       1 event.go:376] "Event occurred" object="kube-system/kindnet-tmn7c" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:16:54.809983       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-ggctb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:16:54.823747       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-t8948" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:16:54.831505       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.795662ms"
	I0404 22:16:54.831727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="81.294µs"
	
	
	==> kube-controller-manager [659d28fd4ccb2f659ff96b4bc13cc4d0623d7316ede252f0a086b84a03f26d7c] <==
	I0404 22:20:41.253722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="26.095113ms"
	I0404 22:20:41.254515       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.335µs"
	I0404 22:20:41.254630       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="44.735µs"
	I0404 22:20:45.484297       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-575162-m02\" does not exist"
	I0404 22:20:45.484810       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-t8948" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-t8948"
	I0404 22:20:45.502602       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-575162-m02" podCIDRs=["10.244.1.0/24"]
	I0404 22:20:46.829379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="126.352µs"
	I0404 22:20:47.377590       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="119.798µs"
	I0404 22:20:47.427367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.108µs"
	I0404 22:20:47.434466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.417µs"
	I0404 22:20:47.445629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="83.749µs"
	I0404 22:20:47.455692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="36.413µs"
	I0404 22:20:47.461073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="68.134µs"
	I0404 22:20:47.461682       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-t8948" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-t8948"
	I0404 22:20:54.626706       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:20:54.651827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="128.846µs"
	I0404 22:20:54.666773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="83.023µs"
	I0404 22:20:56.698552       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ldcpv" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ldcpv"
	I0404 22:20:57.551615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.103006ms"
	I0404 22:20:57.551901       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="96.165µs"
	I0404 22:21:14.181621       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:21:15.293985       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-575162-m03\" does not exist"
	I0404 22:21:15.294296       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:21:15.318906       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-575162-m03" podCIDRs=["10.244.2.0/24"]
	I0404 22:21:24.370237       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	
	
	==> kube-proxy [102e4a1df4286176ee1ed0031104fe277e57f243ef5e3d1787bbcd09c9597adc] <==
	I0404 22:20:04.895085       1 server_others.go:72] "Using iptables proxy"
	I0404 22:20:04.934736       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	I0404 22:20:04.998906       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 22:20:04.998963       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 22:20:04.998986       1 server_others.go:168] "Using iptables Proxier"
	I0404 22:20:05.002622       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 22:20:05.003214       1 server.go:865] "Version info" version="v1.29.3"
	I0404 22:20:05.003256       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:20:05.006411       1 config.go:188] "Starting service config controller"
	I0404 22:20:05.006474       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 22:20:05.006516       1 config.go:97] "Starting endpoint slice config controller"
	I0404 22:20:05.006543       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 22:20:05.007218       1 config.go:315] "Starting node config controller"
	I0404 22:20:05.007261       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 22:20:05.106999       1 shared_informer.go:318] Caches are synced for service config
	I0404 22:20:05.107466       1 shared_informer.go:318] Caches are synced for node config
	I0404 22:20:05.106875       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f] <==
	I0404 22:13:51.045552       1 server_others.go:72] "Using iptables proxy"
	I0404 22:13:51.072269       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	I0404 22:13:51.121028       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 22:13:51.121051       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 22:13:51.121064       1 server_others.go:168] "Using iptables Proxier"
	I0404 22:13:51.125967       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 22:13:51.126259       1 server.go:865] "Version info" version="v1.29.3"
	I0404 22:13:51.126382       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:13:51.128010       1 config.go:188] "Starting service config controller"
	I0404 22:13:51.129354       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 22:13:51.128544       1 config.go:97] "Starting endpoint slice config controller"
	I0404 22:13:51.129420       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 22:13:51.128906       1 config.go:315] "Starting node config controller"
	I0404 22:13:51.129429       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 22:13:51.230529       1 shared_informer.go:318] Caches are synced for node config
	I0404 22:13:51.230570       1 shared_informer.go:318] Caches are synced for service config
	I0404 22:13:51.230589       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c] <==
	E0404 22:13:33.325587       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0404 22:13:33.326151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0404 22:13:34.141611       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0404 22:13:34.141673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0404 22:13:34.147008       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0404 22:13:34.147082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0404 22:13:34.154618       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0404 22:13:34.154667       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0404 22:13:34.219735       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0404 22:13:34.219801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0404 22:13:34.259974       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0404 22:13:34.260109       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0404 22:13:34.299470       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0404 22:13:34.299507       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 22:13:34.485640       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0404 22:13:34.485710       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0404 22:13:34.530647       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0404 22:13:34.530999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0404 22:13:34.551854       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0404 22:13:34.552256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0404 22:13:36.501406       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:18:17.048442       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0404 22:18:17.048540       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0404 22:18:17.057163       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0404 22:18:17.059665       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fd4d0929884ad44e185c7b8183b01e25e1552dfba18d1eb93636aa6a545fa7b3] <==
	I0404 22:20:01.938683       1 serving.go:380] Generated self-signed cert in-memory
	W0404 22:20:03.680051       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0404 22:20:03.680208       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 22:20:03.680291       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0404 22:20:03.680416       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0404 22:20:03.780927       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0404 22:20:03.781055       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:20:03.785781       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0404 22:20:03.788407       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0404 22:20:03.788751       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:20:03.788430       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0404 22:20:03.889221       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 04 22:20:03 multinode-575162 kubelet[3060]: I0404 22:20:03.969280    3060 topology_manager.go:215] "Topology Admit Handler" podUID="47125dc9-91e8-4824-b956-06d1e759a21f" podNamespace="kube-system" podName="coredns-76f75df574-r5flx"
	Apr 04 22:20:03 multinode-575162 kubelet[3060]: I0404 22:20:03.969443    3060 topology_manager.go:215] "Topology Admit Handler" podUID="a92ce752-ae9c-4d7b-b869-63ce1e8f94e9" podNamespace="kube-system" podName="storage-provisioner"
	Apr 04 22:20:03 multinode-575162 kubelet[3060]: I0404 22:20:03.969587    3060 topology_manager.go:215] "Topology Admit Handler" podUID="8b403c4d-20e6-4b64-ae52-fcc9ac940d7e" podNamespace="default" podName="busybox-7fdf7869d9-dlm6j"
	Apr 04 22:20:03 multinode-575162 kubelet[3060]: I0404 22:20:03.984963    3060 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 04 22:20:04 multinode-575162 kubelet[3060]: I0404 22:20:04.021560    3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0074f1f-69d4-49ab-9e2f-10c97b91ae01-xtables-lock\") pod \"kindnet-l9sdd\" (UID: \"d0074f1f-69d4-49ab-9e2f-10c97b91ae01\") " pod="kube-system/kindnet-l9sdd"
	Apr 04 22:20:04 multinode-575162 kubelet[3060]: I0404 22:20:04.021734    3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0074f1f-69d4-49ab-9e2f-10c97b91ae01-lib-modules\") pod \"kindnet-l9sdd\" (UID: \"d0074f1f-69d4-49ab-9e2f-10c97b91ae01\") " pod="kube-system/kindnet-l9sdd"
	Apr 04 22:20:04 multinode-575162 kubelet[3060]: I0404 22:20:04.021912    3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a92ce752-ae9c-4d7b-b869-63ce1e8f94e9-tmp\") pod \"storage-provisioner\" (UID: \"a92ce752-ae9c-4d7b-b869-63ce1e8f94e9\") " pod="kube-system/storage-provisioner"
	Apr 04 22:20:04 multinode-575162 kubelet[3060]: I0404 22:20:04.022025    3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6efa678-d0b7-4708-880c-933bbcf4179c-xtables-lock\") pod \"kube-proxy-p4qc2\" (UID: \"c6efa678-d0b7-4708-880c-933bbcf4179c\") " pod="kube-system/kube-proxy-p4qc2"
	Apr 04 22:20:04 multinode-575162 kubelet[3060]: I0404 22:20:04.022101    3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d0074f1f-69d4-49ab-9e2f-10c97b91ae01-cni-cfg\") pod \"kindnet-l9sdd\" (UID: \"d0074f1f-69d4-49ab-9e2f-10c97b91ae01\") " pod="kube-system/kindnet-l9sdd"
	Apr 04 22:20:04 multinode-575162 kubelet[3060]: I0404 22:20:04.022237    3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6efa678-d0b7-4708-880c-933bbcf4179c-lib-modules\") pod \"kube-proxy-p4qc2\" (UID: \"c6efa678-d0b7-4708-880c-933bbcf4179c\") " pod="kube-system/kube-proxy-p4qc2"
	Apr 04 22:20:13 multinode-575162 kubelet[3060]: I0404 22:20:12.999970    3060 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 04 22:21:00 multinode-575162 kubelet[3060]: E0404 22:21:00.078007    3060 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 22:21:00 multinode-575162 kubelet[3060]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 22:21:00 multinode-575162 kubelet[3060]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 22:21:00 multinode-575162 kubelet[3060]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 22:21:00 multinode-575162 kubelet[3060]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 22:21:00 multinode-575162 kubelet[3060]: E0404 22:21:00.087886    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod707915b69936f4e0289a4380c88d06ba/crio-3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5: Error finding container 3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5: Status 404 returned error can't find the container with id 3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5
	Apr 04 22:21:00 multinode-575162 kubelet[3060]: E0404 22:21:00.088290    3060 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda92ce752-ae9c-4d7b-b869-63ce1e8f94e9/crio-c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3: Error finding container c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3: Status 404 returned error can't find the container with id c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3
	Apr 04 22:21:00 multinode-575162 kubelet[3060]: E0404 22:21:00.089615    3060 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod8b403c4d-20e6-4b64-ae52-fcc9ac940d7e/crio-c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5: Error finding container c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5: Status 404 returned error can't find the container with id c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5
	Apr 04 22:21:00 multinode-575162 kubelet[3060]: E0404 22:21:00.090070    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0d53f9e041d32925a2c1c7a5f2bf7594/crio-4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f: Error finding container 4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f: Status 404 returned error can't find the container with id 4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f
	Apr 04 22:21:00 multinode-575162 kubelet[3060]: E0404 22:21:00.090428    3060 manager.go:1116] Failed to create existing container: /kubepods/podd0074f1f-69d4-49ab-9e2f-10c97b91ae01/crio-b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee: Error finding container b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee: Status 404 returned error can't find the container with id b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee
	Apr 04 22:21:00 multinode-575162 kubelet[3060]: E0404 22:21:00.090747    3060 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podc6efa678-d0b7-4708-880c-933bbcf4179c/crio-e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3: Error finding container e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3: Status 404 returned error can't find the container with id e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3
	Apr 04 22:21:00 multinode-575162 kubelet[3060]: E0404 22:21:00.091095    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod65788567edb4a3228a58bce04f0fbc42/crio-ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8: Error finding container ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8: Status 404 returned error can't find the container with id ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8
	Apr 04 22:21:00 multinode-575162 kubelet[3060]: E0404 22:21:00.091369    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2657366d5a79ca39aad046bc2b34b2e9/crio-beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282: Error finding container beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282: Status 404 returned error can't find the container with id beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282
	Apr 04 22:21:00 multinode-575162 kubelet[3060]: E0404 22:21:00.091708    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod47125dc9-91e8-4824-b956-06d1e759a21f/crio-8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171: Error finding container 8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171: Status 404 returned error can't find the container with id 8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171
	

-- /stdout --
** stderr ** 
	E0404 22:21:27.090535   38747 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/16143-5297/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-575162 -n multinode-575162
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-575162 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (315.12s)

x
+
TestMultiNode/serial/StopMultiNode (141.66s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 stop
E0404 22:23:09.146377   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-575162 stop: exit status 82 (2m0.490648232s)

-- stdout --
	* Stopping node "multinode-575162-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-575162 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 status
E0404 22:23:50.480644   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-575162 status: exit status 3 (18.833858978s)

-- stdout --
	multinode-575162
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-575162-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0404 22:23:50.788434   39284 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	E0404 22:23:50.788472   39284 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host

** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-575162 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-575162 -n multinode-575162
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-575162 logs -n 25: (1.619512228s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m02:/home/docker/cp-test.txt                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162:/home/docker/cp-test_multinode-575162-m02_multinode-575162.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n multinode-575162 sudo cat                                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /home/docker/cp-test_multinode-575162-m02_multinode-575162.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m02:/home/docker/cp-test.txt                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03:/home/docker/cp-test_multinode-575162-m02_multinode-575162-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n multinode-575162-m03 sudo cat                                   | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /home/docker/cp-test_multinode-575162-m02_multinode-575162-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp testdata/cp-test.txt                                                | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m03:/home/docker/cp-test.txt                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2792670073/001/cp-test_multinode-575162-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m03:/home/docker/cp-test.txt                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162:/home/docker/cp-test_multinode-575162-m03_multinode-575162.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n multinode-575162 sudo cat                                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /home/docker/cp-test_multinode-575162-m03_multinode-575162.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m03:/home/docker/cp-test.txt                       | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m02:/home/docker/cp-test_multinode-575162-m03_multinode-575162-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n multinode-575162-m02 sudo cat                                   | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /home/docker/cp-test_multinode-575162-m03_multinode-575162-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-575162 node stop m03                                                          | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	| node    | multinode-575162 node start                                                             | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:16 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-575162                                                                | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:16 UTC |                     |
	| stop    | -p multinode-575162                                                                     | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:16 UTC |                     |
	| start   | -p multinode-575162                                                                     | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:18 UTC | 04 Apr 24 22:21 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-575162                                                                | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:21 UTC |                     |
	| node    | multinode-575162 node delete                                                            | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:21 UTC | 04 Apr 24 22:21 UTC |
	|         | m03                                                                                     |                  |         |                |                     |                     |
	| stop    | multinode-575162 stop                                                                   | multinode-575162 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:21 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 22:18:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 22:18:16.113826   37825 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:18:16.114087   37825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:18:16.114098   37825 out.go:304] Setting ErrFile to fd 2...
	I0404 22:18:16.114102   37825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:18:16.114270   37825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:18:16.114784   37825 out.go:298] Setting JSON to false
	I0404 22:18:16.115728   37825 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3642,"bootTime":1712265455,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:18:16.115798   37825 start.go:139] virtualization: kvm guest
	I0404 22:18:16.118766   37825 out.go:177] * [multinode-575162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:18:16.121039   37825 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:18:16.120997   37825 notify.go:220] Checking for updates...
	I0404 22:18:16.122829   37825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:18:16.124498   37825 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:18:16.126164   37825 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:18:16.127893   37825 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:18:16.129586   37825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:18:16.131992   37825 config.go:182] Loaded profile config "multinode-575162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:18:16.132136   37825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:18:16.132771   37825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:18:16.132821   37825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:18:16.149152   37825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0404 22:18:16.149553   37825 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:18:16.150214   37825 main.go:141] libmachine: Using API Version  1
	I0404 22:18:16.150246   37825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:18:16.150605   37825 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:18:16.150873   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:18:16.188758   37825 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 22:18:16.190536   37825 start.go:297] selected driver: kvm2
	I0404 22:18:16.190559   37825 start.go:901] validating driver "kvm2" against &{Name:multinode-575162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-575162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:18:16.190730   37825 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:18:16.191140   37825 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:18:16.191230   37825 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:18:16.207425   37825 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:18:16.208113   37825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:18:16.208215   37825 cni.go:84] Creating CNI manager for ""
	I0404 22:18:16.208230   37825 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0404 22:18:16.208309   37825 start.go:340] cluster config:
	{Name:multinode-575162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-575162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:18:16.208464   37825 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:18:16.210727   37825 out.go:177] * Starting "multinode-575162" primary control-plane node in "multinode-575162" cluster
	I0404 22:18:16.212423   37825 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:18:16.212472   37825 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0404 22:18:16.212482   37825 cache.go:56] Caching tarball of preloaded images
	I0404 22:18:16.212634   37825 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 22:18:16.212656   37825 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0404 22:18:16.212820   37825 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/config.json ...
	I0404 22:18:16.213048   37825 start.go:360] acquireMachinesLock for multinode-575162: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:18:16.213104   37825 start.go:364] duration metric: took 35.313µs to acquireMachinesLock for "multinode-575162"
	I0404 22:18:16.213120   37825 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:18:16.213125   37825 fix.go:54] fixHost starting: 
	I0404 22:18:16.213413   37825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:18:16.213448   37825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:18:16.228280   37825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0404 22:18:16.228752   37825 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:18:16.229302   37825 main.go:141] libmachine: Using API Version  1
	I0404 22:18:16.229330   37825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:18:16.229674   37825 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:18:16.229938   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:18:16.230095   37825 main.go:141] libmachine: (multinode-575162) Calling .GetState
	I0404 22:18:16.232025   37825 fix.go:112] recreateIfNeeded on multinode-575162: state=Running err=<nil>
	W0404 22:18:16.232051   37825 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:18:16.234548   37825 out.go:177] * Updating the running kvm2 "multinode-575162" VM ...
	I0404 22:18:16.236248   37825 machine.go:94] provisionDockerMachine start ...
	I0404 22:18:16.236271   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:18:16.236456   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.239103   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.239630   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.239650   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.239804   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:18:16.239980   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.240176   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.240325   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:18:16.240515   37825 main.go:141] libmachine: Using SSH client type: native
	I0404 22:18:16.240697   37825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0404 22:18:16.240707   37825 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:18:16.354024   37825 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-575162
	
	I0404 22:18:16.354058   37825 main.go:141] libmachine: (multinode-575162) Calling .GetMachineName
	I0404 22:18:16.354308   37825 buildroot.go:166] provisioning hostname "multinode-575162"
	I0404 22:18:16.354340   37825 main.go:141] libmachine: (multinode-575162) Calling .GetMachineName
	I0404 22:18:16.354590   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.357851   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.358338   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.358372   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.358507   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:18:16.358733   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.358945   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.359094   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:18:16.359263   37825 main.go:141] libmachine: Using SSH client type: native
	I0404 22:18:16.359482   37825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0404 22:18:16.359502   37825 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-575162 && echo "multinode-575162" | sudo tee /etc/hostname
	I0404 22:18:16.484980   37825 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-575162
	
	I0404 22:18:16.485047   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.487952   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.488433   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.488466   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.488705   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:18:16.488909   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.489118   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.489325   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:18:16.489516   37825 main.go:141] libmachine: Using SSH client type: native
	I0404 22:18:16.489713   37825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0404 22:18:16.489731   37825 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-575162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-575162/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-575162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:18:16.597837   37825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:18:16.597863   37825 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:18:16.597910   37825 buildroot.go:174] setting up certificates
	I0404 22:18:16.597921   37825 provision.go:84] configureAuth start
	I0404 22:18:16.597932   37825 main.go:141] libmachine: (multinode-575162) Calling .GetMachineName
	I0404 22:18:16.598269   37825 main.go:141] libmachine: (multinode-575162) Calling .GetIP
	I0404 22:18:16.601285   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.601796   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.601818   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.602092   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.604618   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.605041   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.605071   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.605203   37825 provision.go:143] copyHostCerts
	I0404 22:18:16.605226   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:18:16.605257   37825 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:18:16.605266   37825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:18:16.605328   37825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:18:16.605482   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:18:16.605515   37825 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:18:16.605526   37825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:18:16.605572   37825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:18:16.605643   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:18:16.605666   37825 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:18:16.605676   37825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:18:16.605714   37825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:18:16.605824   37825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.multinode-575162 san=[127.0.0.1 192.168.39.203 localhost minikube multinode-575162]
	I0404 22:18:16.702652   37825 provision.go:177] copyRemoteCerts
	I0404 22:18:16.702718   37825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:18:16.702740   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.705943   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.706453   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.706495   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.706761   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:18:16.706973   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.707209   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:18:16.707376   37825 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/multinode-575162/id_rsa Username:docker}
	I0404 22:18:16.803957   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0404 22:18:16.804042   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:18:16.834039   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0404 22:18:16.834112   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0404 22:18:16.875473   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0404 22:18:16.875550   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:18:16.902409   37825 provision.go:87] duration metric: took 304.474569ms to configureAuth
	I0404 22:18:16.902444   37825 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:18:16.902676   37825 config.go:182] Loaded profile config "multinode-575162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:18:16.902769   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:18:16.906167   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.906653   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:18:16.906690   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:18:16.906870   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:18:16.907137   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.907354   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:18:16.907528   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:18:16.907705   37825 main.go:141] libmachine: Using SSH client type: native
	I0404 22:18:16.907859   37825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0404 22:18:16.907873   37825 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:19:47.643994   37825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:19:47.644026   37825 machine.go:97] duration metric: took 1m31.407760177s to provisionDockerMachine
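For reference, the "%!s(MISSING)" in the SSH command logged above is a formatting artifact of minikube's logger rather than part of the executed command; a minimal hand-run sketch of the same drop-in write (assuming the /etc/sysconfig/crio.minikube path shown in the log) would be:

	# write the CRI-O env drop-in with the insecure service CIDR, then restart the runtime
	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio

The restart is what makes this step slow in this run: the command was issued at 22:18:16 and returned at 22:19:47, accounting for nearly all of the 1m31s provisionDockerMachine duration reported above.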
	I0404 22:19:47.644057   37825 start.go:293] postStartSetup for "multinode-575162" (driver="kvm2")
	I0404 22:19:47.644077   37825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:19:47.644101   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:19:47.644476   37825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:19:47.644505   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:19:47.647785   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.648256   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:47.648292   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.648512   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:19:47.648703   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:19:47.648864   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:19:47.649062   37825 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/multinode-575162/id_rsa Username:docker}
	I0404 22:19:47.737469   37825 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:19:47.742069   37825 command_runner.go:130] > NAME=Buildroot
	I0404 22:19:47.742091   37825 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0404 22:19:47.742097   37825 command_runner.go:130] > ID=buildroot
	I0404 22:19:47.742104   37825 command_runner.go:130] > VERSION_ID=2023.02.9
	I0404 22:19:47.742112   37825 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0404 22:19:47.742149   37825 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:19:47.742163   37825 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:19:47.742239   37825 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:19:47.742322   37825 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:19:47.742333   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /etc/ssl/certs/125542.pem
	I0404 22:19:47.742412   37825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:19:47.753182   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:19:47.780580   37825 start.go:296] duration metric: took 136.502653ms for postStartSetup
	I0404 22:19:47.780621   37825 fix.go:56] duration metric: took 1m31.567495889s for fixHost
	I0404 22:19:47.780641   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:19:47.783445   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.783880   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:47.783919   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.784076   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:19:47.784283   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:19:47.784451   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:19:47.784590   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:19:47.784720   37825 main.go:141] libmachine: Using SSH client type: native
	I0404 22:19:47.784871   37825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0404 22:19:47.784881   37825 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:19:47.901537   37825 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712269187.882279018
	
	I0404 22:19:47.901564   37825 fix.go:216] guest clock: 1712269187.882279018
	I0404 22:19:47.901574   37825 fix.go:229] Guest: 2024-04-04 22:19:47.882279018 +0000 UTC Remote: 2024-04-04 22:19:47.780625428 +0000 UTC m=+91.716975930 (delta=101.65359ms)
	I0404 22:19:47.901601   37825 fix.go:200] guest clock delta is within tolerance: 101.65359ms
	I0404 22:19:47.901606   37825 start.go:83] releasing machines lock for "multinode-575162", held for 1m31.688491214s
	I0404 22:19:47.901623   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:19:47.901951   37825 main.go:141] libmachine: (multinode-575162) Calling .GetIP
	I0404 22:19:47.904881   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.905260   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:47.905296   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.905460   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:19:47.906046   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:19:47.906227   37825 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:19:47.906317   37825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:19:47.906367   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:19:47.906415   37825 ssh_runner.go:195] Run: cat /version.json
	I0404 22:19:47.906442   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:19:47.908762   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.908972   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.909114   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:47.909138   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.909309   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:19:47.909354   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:47.909384   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:47.909499   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:19:47.909571   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:19:47.909657   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:19:47.909727   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:19:47.909777   37825 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/multinode-575162/id_rsa Username:docker}
	I0404 22:19:47.909848   37825 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:19:47.909945   37825 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/multinode-575162/id_rsa Username:docker}
	I0404 22:19:47.985919   37825 command_runner.go:130] > {"iso_version": "v1.33.0-1712138767-18566", "kicbase_version": "v0.0.43-1711559786-18485", "minikube_version": "v1.33.0-beta.0", "commit": "5c97bd855810b9924fd5c0368bb36a4a341f7234"}
	I0404 22:19:47.986122   37825 ssh_runner.go:195] Run: systemctl --version
	I0404 22:19:48.021526   37825 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0404 22:19:48.022205   37825 command_runner.go:130] > systemd 252 (252)
	I0404 22:19:48.022240   37825 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0404 22:19:48.022310   37825 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:19:48.186613   37825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0404 22:19:48.195413   37825 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0404 22:19:48.195459   37825 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:19:48.195509   37825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:19:48.206186   37825 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0404 22:19:48.206214   37825 start.go:494] detecting cgroup driver to use...
	I0404 22:19:48.206299   37825 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:19:48.227089   37825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:19:48.242398   37825 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:19:48.242466   37825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:19:48.257288   37825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:19:48.272558   37825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:19:48.432045   37825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:19:48.591283   37825 docker.go:233] disabling docker service ...
	I0404 22:19:48.591358   37825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:19:48.612695   37825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:19:48.627975   37825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:19:48.781854   37825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:19:48.946842   37825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:19:48.964784   37825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:19:48.985597   37825 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0404 22:19:48.985652   37825 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:19:48.985712   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:48.997310   37825 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:19:48.997388   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.009512   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.021458   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.033814   37825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:19:49.045574   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.057334   37825 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.069274   37825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:19:49.082363   37825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:19:49.093184   37825 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0404 22:19:49.093245   37825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:19:49.103678   37825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:19:49.253323   37825 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:19:57.349108   37825 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.095744231s)
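For readability, the CRI-O configuration steps logged between 22:19:48.985 and 22:19:49.103 amount to the following edits, all against the /etc/crio/crio.conf.d/02-crio.conf path shown in the log (a condensed sketch of the same sed commands captured above, not an independent recipe):

	# point CRI-O at the minikube pause image and the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# allow pods to bind low ports and make sure IPv4 forwarding is on
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

The 8.1s "systemctl restart crio" above is what puts these settings into effect before the socket wait that follows.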
	I0404 22:19:57.349147   37825 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:19:57.349207   37825 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:19:57.355111   37825 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0404 22:19:57.355140   37825 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0404 22:19:57.355147   37825 command_runner.go:130] > Device: 0,22	Inode: 1339        Links: 1
	I0404 22:19:57.355154   37825 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0404 22:19:57.355158   37825 command_runner.go:130] > Access: 2024-04-04 22:19:57.201575493 +0000
	I0404 22:19:57.355164   37825 command_runner.go:130] > Modify: 2024-04-04 22:19:57.201575493 +0000
	I0404 22:19:57.355169   37825 command_runner.go:130] > Change: 2024-04-04 22:19:57.201575493 +0000
	I0404 22:19:57.355172   37825 command_runner.go:130] >  Birth: -
	I0404 22:19:57.355188   37825 start.go:562] Will wait 60s for crictl version
	I0404 22:19:57.355234   37825 ssh_runner.go:195] Run: which crictl
	I0404 22:19:57.359562   37825 command_runner.go:130] > /usr/bin/crictl
	I0404 22:19:57.359683   37825 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:19:57.401287   37825 command_runner.go:130] > Version:  0.1.0
	I0404 22:19:57.401310   37825 command_runner.go:130] > RuntimeName:  cri-o
	I0404 22:19:57.401314   37825 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0404 22:19:57.401320   37825 command_runner.go:130] > RuntimeApiVersion:  v1
	I0404 22:19:57.401396   37825 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
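The runtime readiness probe above can be reproduced by hand against the same socket that was written to /etc/crictl.yaml earlier in this log; a minimal sketch:

	# query the CRI runtime identity over the configured endpoint
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version

On this node it should report RuntimeName cri-o, RuntimeVersion 1.29.1 and RuntimeApiVersion v1, matching the output captured above.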
	I0404 22:19:57.401485   37825 ssh_runner.go:195] Run: crio --version
	I0404 22:19:57.433835   37825 command_runner.go:130] > crio version 1.29.1
	I0404 22:19:57.433867   37825 command_runner.go:130] > Version:        1.29.1
	I0404 22:19:57.433875   37825 command_runner.go:130] > GitCommit:      unknown
	I0404 22:19:57.433882   37825 command_runner.go:130] > GitCommitDate:  unknown
	I0404 22:19:57.433888   37825 command_runner.go:130] > GitTreeState:   clean
	I0404 22:19:57.433900   37825 command_runner.go:130] > BuildDate:      2024-04-03T13:58:01Z
	I0404 22:19:57.433907   37825 command_runner.go:130] > GoVersion:      go1.21.6
	I0404 22:19:57.433913   37825 command_runner.go:130] > Compiler:       gc
	I0404 22:19:57.433921   37825 command_runner.go:130] > Platform:       linux/amd64
	I0404 22:19:57.433926   37825 command_runner.go:130] > Linkmode:       dynamic
	I0404 22:19:57.433941   37825 command_runner.go:130] > BuildTags:      
	I0404 22:19:57.433949   37825 command_runner.go:130] >   containers_image_ostree_stub
	I0404 22:19:57.433959   37825 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0404 22:19:57.433965   37825 command_runner.go:130] >   btrfs_noversion
	I0404 22:19:57.433973   37825 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0404 22:19:57.433977   37825 command_runner.go:130] >   libdm_no_deferred_remove
	I0404 22:19:57.433981   37825 command_runner.go:130] >   seccomp
	I0404 22:19:57.433985   37825 command_runner.go:130] > LDFlags:          unknown
	I0404 22:19:57.433994   37825 command_runner.go:130] > SeccompEnabled:   true
	I0404 22:19:57.434001   37825 command_runner.go:130] > AppArmorEnabled:  false
	I0404 22:19:57.434063   37825 ssh_runner.go:195] Run: crio --version
	I0404 22:19:57.465975   37825 command_runner.go:130] > crio version 1.29.1
	I0404 22:19:57.465997   37825 command_runner.go:130] > Version:        1.29.1
	I0404 22:19:57.466003   37825 command_runner.go:130] > GitCommit:      unknown
	I0404 22:19:57.466007   37825 command_runner.go:130] > GitCommitDate:  unknown
	I0404 22:19:57.466011   37825 command_runner.go:130] > GitTreeState:   clean
	I0404 22:19:57.466021   37825 command_runner.go:130] > BuildDate:      2024-04-03T13:58:01Z
	I0404 22:19:57.466025   37825 command_runner.go:130] > GoVersion:      go1.21.6
	I0404 22:19:57.466030   37825 command_runner.go:130] > Compiler:       gc
	I0404 22:19:57.466036   37825 command_runner.go:130] > Platform:       linux/amd64
	I0404 22:19:57.466042   37825 command_runner.go:130] > Linkmode:       dynamic
	I0404 22:19:57.466048   37825 command_runner.go:130] > BuildTags:      
	I0404 22:19:57.466054   37825 command_runner.go:130] >   containers_image_ostree_stub
	I0404 22:19:57.466061   37825 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0404 22:19:57.466067   37825 command_runner.go:130] >   btrfs_noversion
	I0404 22:19:57.466081   37825 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0404 22:19:57.466093   37825 command_runner.go:130] >   libdm_no_deferred_remove
	I0404 22:19:57.466099   37825 command_runner.go:130] >   seccomp
	I0404 22:19:57.466105   37825 command_runner.go:130] > LDFlags:          unknown
	I0404 22:19:57.466112   37825 command_runner.go:130] > SeccompEnabled:   true
	I0404 22:19:57.466118   37825 command_runner.go:130] > AppArmorEnabled:  false
	I0404 22:19:57.469337   37825 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:19:57.470824   37825 main.go:141] libmachine: (multinode-575162) Calling .GetIP
	I0404 22:19:57.473887   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:57.474276   37825 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:19:57.474299   37825 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:19:57.474521   37825 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:19:57.479273   37825 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0404 22:19:57.479361   37825 kubeadm.go:877] updating cluster {Name:multinode-575162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
29.3 ClusterName:multinode-575162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:19:57.479487   37825 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:19:57.479550   37825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:19:57.524835   37825 command_runner.go:130] > {
	I0404 22:19:57.524859   37825 command_runner.go:130] >   "images": [
	I0404 22:19:57.524864   37825 command_runner.go:130] >     {
	I0404 22:19:57.524871   37825 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0404 22:19:57.524876   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.524881   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0404 22:19:57.524890   37825 command_runner.go:130] >       ],
	I0404 22:19:57.524896   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.524918   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0404 22:19:57.524933   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0404 22:19:57.524942   37825 command_runner.go:130] >       ],
	I0404 22:19:57.524950   37825 command_runner.go:130] >       "size": "65291810",
	I0404 22:19:57.524960   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.524964   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.524971   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.524975   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.524980   37825 command_runner.go:130] >     },
	I0404 22:19:57.524984   37825 command_runner.go:130] >     {
	I0404 22:19:57.524996   37825 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0404 22:19:57.525004   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525012   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0404 22:19:57.525017   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525023   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525038   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0404 22:19:57.525051   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0404 22:19:57.525069   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525075   37825 command_runner.go:130] >       "size": "1363676",
	I0404 22:19:57.525079   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.525089   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.525093   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525098   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525102   37825 command_runner.go:130] >     },
	I0404 22:19:57.525108   37825 command_runner.go:130] >     {
	I0404 22:19:57.525114   37825 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0404 22:19:57.525122   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525136   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0404 22:19:57.525146   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525153   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525167   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0404 22:19:57.525177   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0404 22:19:57.525181   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525185   37825 command_runner.go:130] >       "size": "31470524",
	I0404 22:19:57.525189   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.525196   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.525200   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525208   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525214   37825 command_runner.go:130] >     },
	I0404 22:19:57.525224   37825 command_runner.go:130] >     {
	I0404 22:19:57.525235   37825 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0404 22:19:57.525244   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525255   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0404 22:19:57.525264   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525274   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525288   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0404 22:19:57.525305   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0404 22:19:57.525314   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525328   37825 command_runner.go:130] >       "size": "61245718",
	I0404 22:19:57.525336   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.525342   37825 command_runner.go:130] >       "username": "nonroot",
	I0404 22:19:57.525349   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525359   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525378   37825 command_runner.go:130] >     },
	I0404 22:19:57.525387   37825 command_runner.go:130] >     {
	I0404 22:19:57.525401   37825 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0404 22:19:57.525410   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525419   37825 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0404 22:19:57.525428   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525438   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525468   37825 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0404 22:19:57.525483   37825 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0404 22:19:57.525492   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525502   37825 command_runner.go:130] >       "size": "150779692",
	I0404 22:19:57.525510   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.525520   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.525526   37825 command_runner.go:130] >       },
	I0404 22:19:57.525533   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.525539   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525549   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525557   37825 command_runner.go:130] >     },
	I0404 22:19:57.525562   37825 command_runner.go:130] >     {
	I0404 22:19:57.525576   37825 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0404 22:19:57.525586   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525597   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0404 22:19:57.525605   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525614   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525628   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0404 22:19:57.525639   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0404 22:19:57.525649   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525660   37825 command_runner.go:130] >       "size": "128508878",
	I0404 22:19:57.525666   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.525676   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.525685   37825 command_runner.go:130] >       },
	I0404 22:19:57.525694   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.525708   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525718   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525726   37825 command_runner.go:130] >     },
	I0404 22:19:57.525729   37825 command_runner.go:130] >     {
	I0404 22:19:57.525747   37825 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0404 22:19:57.525781   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525790   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0404 22:19:57.525801   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525811   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.525826   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0404 22:19:57.525842   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0404 22:19:57.525851   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525859   37825 command_runner.go:130] >       "size": "123142962",
	I0404 22:19:57.525866   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.525873   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.525882   37825 command_runner.go:130] >       },
	I0404 22:19:57.525892   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.525901   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.525912   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.525920   37825 command_runner.go:130] >     },
	I0404 22:19:57.525929   37825 command_runner.go:130] >     {
	I0404 22:19:57.525942   37825 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0404 22:19:57.525949   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.525956   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0404 22:19:57.525966   37825 command_runner.go:130] >       ],
	I0404 22:19:57.525976   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.526005   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0404 22:19:57.526020   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0404 22:19:57.526029   37825 command_runner.go:130] >       ],
	I0404 22:19:57.526039   37825 command_runner.go:130] >       "size": "83634073",
	I0404 22:19:57.526047   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.526062   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.526069   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.526076   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.526081   37825 command_runner.go:130] >     },
	I0404 22:19:57.526087   37825 command_runner.go:130] >     {
	I0404 22:19:57.526097   37825 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0404 22:19:57.526106   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.526117   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0404 22:19:57.526126   37825 command_runner.go:130] >       ],
	I0404 22:19:57.526137   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.526153   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0404 22:19:57.526168   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0404 22:19:57.526178   37825 command_runner.go:130] >       ],
	I0404 22:19:57.526188   37825 command_runner.go:130] >       "size": "60724018",
	I0404 22:19:57.526197   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.526206   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.526215   37825 command_runner.go:130] >       },
	I0404 22:19:57.526229   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.526235   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.526240   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.526248   37825 command_runner.go:130] >     },
	I0404 22:19:57.526255   37825 command_runner.go:130] >     {
	I0404 22:19:57.526269   37825 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0404 22:19:57.526279   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.526293   37825 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0404 22:19:57.526302   37825 command_runner.go:130] >       ],
	I0404 22:19:57.526312   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.526326   37825 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0404 22:19:57.526342   37825 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0404 22:19:57.526350   37825 command_runner.go:130] >       ],
	I0404 22:19:57.526356   37825 command_runner.go:130] >       "size": "750414",
	I0404 22:19:57.526364   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.526375   37825 command_runner.go:130] >         "value": "65535"
	I0404 22:19:57.526384   37825 command_runner.go:130] >       },
	I0404 22:19:57.526391   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.526402   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.526412   37825 command_runner.go:130] >       "pinned": true
	I0404 22:19:57.526419   37825 command_runner.go:130] >     }
	I0404 22:19:57.526427   37825 command_runner.go:130] >   ]
	I0404 22:19:57.526433   37825 command_runner.go:130] > }
	I0404 22:19:57.526635   37825 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:19:57.526649   37825 crio.go:433] Images already preloaded, skipping extraction
	I0404 22:19:57.526706   37825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:19:57.565347   37825 command_runner.go:130] > {
	I0404 22:19:57.565376   37825 command_runner.go:130] >   "images": [
	I0404 22:19:57.565380   37825 command_runner.go:130] >     {
	I0404 22:19:57.565388   37825 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0404 22:19:57.565393   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.565402   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0404 22:19:57.565406   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565410   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.565427   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0404 22:19:57.565437   37825 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0404 22:19:57.565443   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565463   37825 command_runner.go:130] >       "size": "65291810",
	I0404 22:19:57.565473   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.565478   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.565488   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.565495   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.565503   37825 command_runner.go:130] >     },
	I0404 22:19:57.565507   37825 command_runner.go:130] >     {
	I0404 22:19:57.565516   37825 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0404 22:19:57.565520   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.565528   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0404 22:19:57.565531   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565535   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.565543   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0404 22:19:57.565550   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0404 22:19:57.565557   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565563   37825 command_runner.go:130] >       "size": "1363676",
	I0404 22:19:57.565573   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.565589   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.565599   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.565609   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.565616   37825 command_runner.go:130] >     },
	I0404 22:19:57.565619   37825 command_runner.go:130] >     {
	I0404 22:19:57.565631   37825 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0404 22:19:57.565637   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.565643   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0404 22:19:57.565648   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565660   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.565675   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0404 22:19:57.565693   37825 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0404 22:19:57.565702   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565712   37825 command_runner.go:130] >       "size": "31470524",
	I0404 22:19:57.565721   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.565729   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.565733   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.565739   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.565742   37825 command_runner.go:130] >     },
	I0404 22:19:57.565746   37825 command_runner.go:130] >     {
	I0404 22:19:57.565751   37825 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0404 22:19:57.565756   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.565761   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0404 22:19:57.565770   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565775   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.565791   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0404 22:19:57.565852   37825 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0404 22:19:57.565864   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565868   37825 command_runner.go:130] >       "size": "61245718",
	I0404 22:19:57.565872   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.565876   37825 command_runner.go:130] >       "username": "nonroot",
	I0404 22:19:57.565883   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.565889   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.565897   37825 command_runner.go:130] >     },
	I0404 22:19:57.565906   37825 command_runner.go:130] >     {
	I0404 22:19:57.565930   37825 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0404 22:19:57.565940   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.565950   37825 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0404 22:19:57.565959   37825 command_runner.go:130] >       ],
	I0404 22:19:57.565969   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.565981   37825 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0404 22:19:57.565994   37825 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0404 22:19:57.566003   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566010   37825 command_runner.go:130] >       "size": "150779692",
	I0404 22:19:57.566019   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.566035   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.566044   37825 command_runner.go:130] >       },
	I0404 22:19:57.566054   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566062   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566071   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.566078   37825 command_runner.go:130] >     },
	I0404 22:19:57.566082   37825 command_runner.go:130] >     {
	I0404 22:19:57.566094   37825 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0404 22:19:57.566104   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.566117   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0404 22:19:57.566126   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566135   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.566150   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0404 22:19:57.566165   37825 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0404 22:19:57.566171   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566176   37825 command_runner.go:130] >       "size": "128508878",
	I0404 22:19:57.566185   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.566195   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.566201   37825 command_runner.go:130] >       },
	I0404 22:19:57.566207   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566217   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566226   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.566234   37825 command_runner.go:130] >     },
	I0404 22:19:57.566242   37825 command_runner.go:130] >     {
	I0404 22:19:57.566253   37825 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0404 22:19:57.566263   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.566272   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0404 22:19:57.566280   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566290   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.566306   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0404 22:19:57.566322   37825 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0404 22:19:57.566331   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566341   37825 command_runner.go:130] >       "size": "123142962",
	I0404 22:19:57.566350   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.566359   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.566367   37825 command_runner.go:130] >       },
	I0404 22:19:57.566385   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566394   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566404   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.566413   37825 command_runner.go:130] >     },
	I0404 22:19:57.566421   37825 command_runner.go:130] >     {
	I0404 22:19:57.566431   37825 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0404 22:19:57.566439   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.566451   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0404 22:19:57.566459   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566469   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.566503   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0404 22:19:57.566518   37825 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0404 22:19:57.566522   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566527   37825 command_runner.go:130] >       "size": "83634073",
	I0404 22:19:57.566533   37825 command_runner.go:130] >       "uid": null,
	I0404 22:19:57.566539   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566546   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566554   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.566560   37825 command_runner.go:130] >     },
	I0404 22:19:57.566565   37825 command_runner.go:130] >     {
	I0404 22:19:57.566577   37825 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0404 22:19:57.566587   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.566595   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0404 22:19:57.566602   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566609   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.566623   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0404 22:19:57.566636   37825 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0404 22:19:57.566642   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566649   37825 command_runner.go:130] >       "size": "60724018",
	I0404 22:19:57.566656   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.566663   37825 command_runner.go:130] >         "value": "0"
	I0404 22:19:57.566669   37825 command_runner.go:130] >       },
	I0404 22:19:57.566680   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566687   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566695   37825 command_runner.go:130] >       "pinned": false
	I0404 22:19:57.566704   37825 command_runner.go:130] >     },
	I0404 22:19:57.566717   37825 command_runner.go:130] >     {
	I0404 22:19:57.566730   37825 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0404 22:19:57.566739   37825 command_runner.go:130] >       "repoTags": [
	I0404 22:19:57.566748   37825 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0404 22:19:57.566757   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566765   37825 command_runner.go:130] >       "repoDigests": [
	I0404 22:19:57.566780   37825 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0404 22:19:57.566796   37825 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0404 22:19:57.566812   37825 command_runner.go:130] >       ],
	I0404 22:19:57.566823   37825 command_runner.go:130] >       "size": "750414",
	I0404 22:19:57.566833   37825 command_runner.go:130] >       "uid": {
	I0404 22:19:57.566840   37825 command_runner.go:130] >         "value": "65535"
	I0404 22:19:57.566849   37825 command_runner.go:130] >       },
	I0404 22:19:57.566856   37825 command_runner.go:130] >       "username": "",
	I0404 22:19:57.566866   37825 command_runner.go:130] >       "spec": null,
	I0404 22:19:57.566874   37825 command_runner.go:130] >       "pinned": true
	I0404 22:19:57.566882   37825 command_runner.go:130] >     }
	I0404 22:19:57.566887   37825 command_runner.go:130] >   ]
	I0404 22:19:57.566895   37825 command_runner.go:130] > }
	I0404 22:19:57.567039   37825 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:19:57.567052   37825 cache_images.go:84] Images are preloaded, skipping loading
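The "all images are preloaded" conclusion above comes from listing what the CRI-O image service already holds and comparing it against the expected image set. A minimal sketch of the same query, assuming the standard k8s.io/cri-api client and the default CRI-O socket path (illustrative only, not minikube's actual code path):

// list_images.go - sketch: query CRI-O's image service over the CRI gRPC API.
// Assumes /var/run/crio/crio.sock; the RepoTags/RepoDigests/Pinned fields
// mirror the JSON fields in the log output above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	criv1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := criv1.NewImageServiceClient(conn).ListImages(ctx, &criv1.ListImagesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range resp.Images {
		fmt.Println(img.RepoTags, img.Pinned)
	}
}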
	I0404 22:19:57.567061   37825 kubeadm.go:928] updating node { 192.168.39.203 8443 v1.29.3 crio true true} ...
	I0404 22:19:57.567172   37825 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-575162 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-575162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
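The kubelet unit shown above is the base ExecStart plus a handful of per-node flags (hostname-override, node-ip, kubeconfig paths). A toy sketch of assembling such a flag line; the struct and field names here are hypothetical and are not minikube's real template:

// kubelet_flags.go - toy sketch: join per-node kubelet flags into an ExecStart line.
package main

import (
	"fmt"
	"strings"
)

type nodeConfig struct {
	BinaryPath string
	Hostname   string
	NodeIP     string
}

func execStart(n nodeConfig) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + n.Hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + n.NodeIP,
	}
	return "ExecStart=" + n.BinaryPath + " " + strings.Join(flags, " ")
}

func main() {
	fmt.Println(execStart(nodeConfig{
		BinaryPath: "/var/lib/minikube/binaries/v1.29.3/kubelet",
		Hostname:   "multinode-575162",
		NodeIP:     "192.168.39.203",
	}))
}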
	I0404 22:19:57.567251   37825 ssh_runner.go:195] Run: crio config
	I0404 22:19:57.617542   37825 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0404 22:19:57.617572   37825 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0404 22:19:57.617581   37825 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0404 22:19:57.617586   37825 command_runner.go:130] > #
	I0404 22:19:57.617610   37825 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0404 22:19:57.617619   37825 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0404 22:19:57.617627   37825 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0404 22:19:57.617636   37825 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0404 22:19:57.617641   37825 command_runner.go:130] > # reload'.
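The header above notes that CRI-O re-reads the reloadable options when it receives SIGHUP. A minimal sketch of triggering that reload, assuming pgrep is available on the node; in practice `systemctl reload crio` does the same thing:

// reload_crio.go - sketch: send SIGHUP to the crio process so the options
// marked "supports live configuration reload" are re-read.
package main

import (
	"log"
	"os/exec"
	"strconv"
	"strings"
	"syscall"
)

func main() {
	out, err := exec.Command("pgrep", "-x", "crio").Output()
	if err != nil {
		log.Fatal(err)
	}
	pid, err := strconv.Atoi(strings.Fields(string(out))[0])
	if err != nil {
		log.Fatal(err)
	}
	if err := syscall.Kill(pid, syscall.SIGHUP); err != nil {
		log.Fatal(err)
	}
	log.Printf("sent SIGHUP to crio (pid %d)", pid)
}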
	I0404 22:19:57.617656   37825 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0404 22:19:57.617666   37825 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0404 22:19:57.617679   37825 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0404 22:19:57.617690   37825 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0404 22:19:57.617697   37825 command_runner.go:130] > [crio]
	I0404 22:19:57.617708   37825 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0404 22:19:57.617720   37825 command_runner.go:130] > # container images, in this directory.
	I0404 22:19:57.617743   37825 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0404 22:19:57.617793   37825 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0404 22:19:57.617890   37825 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0404 22:19:57.617916   37825 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0404 22:19:57.618074   37825 command_runner.go:130] > # imagestore = ""
	I0404 22:19:57.618111   37825 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0404 22:19:57.618126   37825 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0404 22:19:57.618262   37825 command_runner.go:130] > storage_driver = "overlay"
	I0404 22:19:57.618280   37825 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0404 22:19:57.618290   37825 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0404 22:19:57.618305   37825 command_runner.go:130] > storage_option = [
	I0404 22:19:57.618469   37825 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0404 22:19:57.618503   37825 command_runner.go:130] > ]
	I0404 22:19:57.618518   37825 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0404 22:19:57.618531   37825 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0404 22:19:57.618864   37825 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0404 22:19:57.618882   37825 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0404 22:19:57.618893   37825 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0404 22:19:57.618901   37825 command_runner.go:130] > # always happen on a node reboot
	I0404 22:19:57.619105   37825 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0404 22:19:57.619131   37825 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0404 22:19:57.619144   37825 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0404 22:19:57.619155   37825 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0404 22:19:57.619260   37825 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0404 22:19:57.619276   37825 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0404 22:19:57.619289   37825 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0404 22:19:57.619466   37825 command_runner.go:130] > # internal_wipe = true
	I0404 22:19:57.619487   37825 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0404 22:19:57.619496   37825 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0404 22:19:57.619768   37825 command_runner.go:130] > # internal_repair = false
	I0404 22:19:57.619786   37825 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0404 22:19:57.619796   37825 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0404 22:19:57.619804   37825 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0404 22:19:57.620097   37825 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
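The [crio] table dumped above carries the storage layout (root, runroot, storage_driver, storage_option). A small sketch decoding just those keys from /etc/crio/crio.conf, assuming the github.com/BurntSushi/toml package; CRI-O itself uses its own loader, so this is only for inspection:

// read_crio_conf.go - sketch: decode the [crio] storage fields shown above.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type crioConf struct {
	Crio struct {
		Root          string   `toml:"root"`
		RunRoot       string   `toml:"runroot"`
		StorageDriver string   `toml:"storage_driver"`
		StorageOption []string `toml:"storage_option"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConf
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("root=%s runroot=%s driver=%s opts=%v\n",
		cfg.Crio.Root, cfg.Crio.RunRoot, cfg.Crio.StorageDriver, cfg.Crio.StorageOption)
}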
	I0404 22:19:57.620129   37825 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0404 22:19:57.620136   37825 command_runner.go:130] > [crio.api]
	I0404 22:19:57.620144   37825 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0404 22:19:57.620406   37825 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0404 22:19:57.620424   37825 command_runner.go:130] > # IP address on which the stream server will listen.
	I0404 22:19:57.620857   37825 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0404 22:19:57.620882   37825 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0404 22:19:57.620891   37825 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0404 22:19:57.621105   37825 command_runner.go:130] > # stream_port = "0"
	I0404 22:19:57.621124   37825 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0404 22:19:57.621318   37825 command_runner.go:130] > # stream_enable_tls = false
	I0404 22:19:57.621338   37825 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0404 22:19:57.621542   37825 command_runner.go:130] > # stream_idle_timeout = ""
	I0404 22:19:57.621558   37825 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0404 22:19:57.621568   37825 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0404 22:19:57.621577   37825 command_runner.go:130] > # minutes.
	I0404 22:19:57.621855   37825 command_runner.go:130] > # stream_tls_cert = ""
	I0404 22:19:57.621870   37825 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0404 22:19:57.621881   37825 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0404 22:19:57.621991   37825 command_runner.go:130] > # stream_tls_key = ""
	I0404 22:19:57.622006   37825 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0404 22:19:57.622016   37825 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0404 22:19:57.622040   37825 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0404 22:19:57.622051   37825 command_runner.go:130] > # stream_tls_ca = ""
	I0404 22:19:57.622062   37825 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0404 22:19:57.622070   37825 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0404 22:19:57.622086   37825 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0404 22:19:57.622104   37825 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
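grpc_max_send_msg_size and grpc_max_recv_msg_size above are both set to 16777216 (16 MiB). A client that talks to the socket directly can set matching call options; a minimal sketch, assuming the default socket path:

// dial_cri.go - sketch: dial the CRI-O socket with call options matching the
// 16 MiB message limits configured above.
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	const maxMsg = 16 * 1024 * 1024 // 16777216, as in the config above
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxMsg),
			grpc.MaxCallSendMsgSize(maxMsg),
		))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("connected:", conn.Target())
}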
	I0404 22:19:57.622115   37825 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0404 22:19:57.622128   37825 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0404 22:19:57.622137   37825 command_runner.go:130] > [crio.runtime]
	I0404 22:19:57.622147   37825 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0404 22:19:57.622160   37825 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0404 22:19:57.622168   37825 command_runner.go:130] > # "nofile=1024:2048"
	I0404 22:19:57.622176   37825 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0404 22:19:57.622184   37825 command_runner.go:130] > # default_ulimits = [
	I0404 22:19:57.622190   37825 command_runner.go:130] > # ]
	I0404 22:19:57.622202   37825 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0404 22:19:57.622214   37825 command_runner.go:130] > # no_pivot = false
	I0404 22:19:57.622225   37825 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0404 22:19:57.622238   37825 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0404 22:19:57.622249   37825 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0404 22:19:57.622260   37825 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0404 22:19:57.622265   37825 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0404 22:19:57.622280   37825 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0404 22:19:57.622292   37825 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0404 22:19:57.622300   37825 command_runner.go:130] > # Cgroup setting for conmon
	I0404 22:19:57.622312   37825 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0404 22:19:57.622323   37825 command_runner.go:130] > conmon_cgroup = "pod"
	I0404 22:19:57.622333   37825 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0404 22:19:57.622344   37825 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0404 22:19:57.622361   37825 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0404 22:19:57.622369   37825 command_runner.go:130] > conmon_env = [
	I0404 22:19:57.622379   37825 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0404 22:19:57.622388   37825 command_runner.go:130] > ]
	I0404 22:19:57.622397   37825 command_runner.go:130] > # Additional environment variables to set for all the
	I0404 22:19:57.622409   37825 command_runner.go:130] > # containers. These are overridden if set in the
	I0404 22:19:57.622421   37825 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0404 22:19:57.622428   37825 command_runner.go:130] > # default_env = [
	I0404 22:19:57.622434   37825 command_runner.go:130] > # ]
	I0404 22:19:57.622446   37825 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0404 22:19:57.622461   37825 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0404 22:19:57.622471   37825 command_runner.go:130] > # selinux = false
	I0404 22:19:57.622499   37825 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0404 22:19:57.622518   37825 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0404 22:19:57.622528   37825 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0404 22:19:57.622538   37825 command_runner.go:130] > # seccomp_profile = ""
	I0404 22:19:57.622548   37825 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0404 22:19:57.622561   37825 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0404 22:19:57.622579   37825 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0404 22:19:57.622591   37825 command_runner.go:130] > # which might increase security.
	I0404 22:19:57.622602   37825 command_runner.go:130] > # This option is currently deprecated,
	I0404 22:19:57.622611   37825 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0404 22:19:57.622621   37825 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0404 22:19:57.622628   37825 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0404 22:19:57.622640   37825 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0404 22:19:57.622654   37825 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0404 22:19:57.622665   37825 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0404 22:19:57.622677   37825 command_runner.go:130] > # This option supports live configuration reload.
	I0404 22:19:57.622688   37825 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0404 22:19:57.622699   37825 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0404 22:19:57.622710   37825 command_runner.go:130] > # the cgroup blockio controller.
	I0404 22:19:57.622720   37825 command_runner.go:130] > # blockio_config_file = ""
	I0404 22:19:57.622731   37825 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0404 22:19:57.622741   37825 command_runner.go:130] > # blockio parameters.
	I0404 22:19:57.622748   37825 command_runner.go:130] > # blockio_reload = false
	I0404 22:19:57.622761   37825 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0404 22:19:57.622771   37825 command_runner.go:130] > # irqbalance daemon.
	I0404 22:19:57.622781   37825 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0404 22:19:57.622795   37825 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0404 22:19:57.622809   37825 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0404 22:19:57.622823   37825 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0404 22:19:57.622860   37825 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0404 22:19:57.622875   37825 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0404 22:19:57.622890   37825 command_runner.go:130] > # This option supports live configuration reload.
	I0404 22:19:57.622899   37825 command_runner.go:130] > # rdt_config_file = ""
	I0404 22:19:57.622905   37825 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0404 22:19:57.622910   37825 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0404 22:19:57.622946   37825 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0404 22:19:57.622962   37825 command_runner.go:130] > # separate_pull_cgroup = ""
	I0404 22:19:57.622975   37825 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0404 22:19:57.622989   37825 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0404 22:19:57.622998   37825 command_runner.go:130] > # will be added.
	I0404 22:19:57.623005   37825 command_runner.go:130] > # default_capabilities = [
	I0404 22:19:57.623015   37825 command_runner.go:130] > # 	"CHOWN",
	I0404 22:19:57.623021   37825 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0404 22:19:57.623030   37825 command_runner.go:130] > # 	"FSETID",
	I0404 22:19:57.623035   37825 command_runner.go:130] > # 	"FOWNER",
	I0404 22:19:57.623045   37825 command_runner.go:130] > # 	"SETGID",
	I0404 22:19:57.623050   37825 command_runner.go:130] > # 	"SETUID",
	I0404 22:19:57.623060   37825 command_runner.go:130] > # 	"SETPCAP",
	I0404 22:19:57.623067   37825 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0404 22:19:57.623076   37825 command_runner.go:130] > # 	"KILL",
	I0404 22:19:57.623084   37825 command_runner.go:130] > # ]
	I0404 22:19:57.623097   37825 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0404 22:19:57.623111   37825 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0404 22:19:57.623121   37825 command_runner.go:130] > # add_inheritable_capabilities = false
	I0404 22:19:57.623133   37825 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0404 22:19:57.623143   37825 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0404 22:19:57.623152   37825 command_runner.go:130] > default_sysctls = [
	I0404 22:19:57.623159   37825 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0404 22:19:57.623169   37825 command_runner.go:130] > ]
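The default_sysctls entry above sets net.ipv4.ip_unprivileged_port_start=0, which is what lets unprivileged containers bind low ports without CAP_NET_BIND_SERVICE. A sketch that, run inside a container on this node, would confirm the sysctl took effect:

// check_sysctl.go - sketch: verify the default_sysctls entry from inside a container.
package main

import (
	"fmt"
	"log"
	"net"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/proc/sys/net/ipv4/ip_unprivileged_port_start")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ip_unprivileged_port_start =", strings.TrimSpace(string(raw)))

	// With the value 0, binding a low port works without extra capabilities.
	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		log.Fatal(err)
	}
	ln.Close()
	fmt.Println("bound :80 as an unprivileged process")
}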
	I0404 22:19:57.623177   37825 command_runner.go:130] > # List of devices on the host that a
	I0404 22:19:57.623188   37825 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0404 22:19:57.623197   37825 command_runner.go:130] > # allowed_devices = [
	I0404 22:19:57.623203   37825 command_runner.go:130] > # 	"/dev/fuse",
	I0404 22:19:57.623211   37825 command_runner.go:130] > # ]
	I0404 22:19:57.623218   37825 command_runner.go:130] > # List of additional devices, specified as
	I0404 22:19:57.623229   37825 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0404 22:19:57.623239   37825 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0404 22:19:57.623252   37825 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0404 22:19:57.623262   37825 command_runner.go:130] > # additional_devices = [
	I0404 22:19:57.623268   37825 command_runner.go:130] > # ]
	I0404 22:19:57.623279   37825 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0404 22:19:57.623286   37825 command_runner.go:130] > # cdi_spec_dirs = [
	I0404 22:19:57.623301   37825 command_runner.go:130] > # 	"/etc/cdi",
	I0404 22:19:57.623309   37825 command_runner.go:130] > # 	"/var/run/cdi",
	I0404 22:19:57.623312   37825 command_runner.go:130] > # ]
	I0404 22:19:57.623321   37825 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0404 22:19:57.623334   37825 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0404 22:19:57.623344   37825 command_runner.go:130] > # Defaults to false.
	I0404 22:19:57.623352   37825 command_runner.go:130] > # device_ownership_from_security_context = false
	I0404 22:19:57.623364   37825 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0404 22:19:57.623377   37825 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0404 22:19:57.623386   37825 command_runner.go:130] > # hooks_dir = [
	I0404 22:19:57.623398   37825 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0404 22:19:57.623407   37825 command_runner.go:130] > # ]
	I0404 22:19:57.623417   37825 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0404 22:19:57.623431   37825 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0404 22:19:57.623442   37825 command_runner.go:130] > # its default mounts from the following two files:
	I0404 22:19:57.623447   37825 command_runner.go:130] > #
	I0404 22:19:57.623460   37825 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0404 22:19:57.623473   37825 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0404 22:19:57.623483   37825 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0404 22:19:57.623487   37825 command_runner.go:130] > #
	I0404 22:19:57.623495   37825 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0404 22:19:57.623508   37825 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0404 22:19:57.623522   37825 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0404 22:19:57.623532   37825 command_runner.go:130] > #      only add mounts it finds in this file.
	I0404 22:19:57.623539   37825 command_runner.go:130] > #
	I0404 22:19:57.623546   37825 command_runner.go:130] > # default_mounts_file = ""
	I0404 22:19:57.623566   37825 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0404 22:19:57.623575   37825 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0404 22:19:57.623581   37825 command_runner.go:130] > pids_limit = 1024
	I0404 22:19:57.623594   37825 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0404 22:19:57.623608   37825 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0404 22:19:57.623621   37825 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0404 22:19:57.623636   37825 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0404 22:19:57.623645   37825 command_runner.go:130] > # log_size_max = -1
	I0404 22:19:57.623655   37825 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0404 22:19:57.623661   37825 command_runner.go:130] > # log_to_journald = false
	I0404 22:19:57.623677   37825 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0404 22:19:57.623690   37825 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0404 22:19:57.623701   37825 command_runner.go:130] > # Path to directory for container attach sockets.
	I0404 22:19:57.623710   37825 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0404 22:19:57.623721   37825 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0404 22:19:57.623732   37825 command_runner.go:130] > # bind_mount_prefix = ""
	I0404 22:19:57.623742   37825 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0404 22:19:57.623749   37825 command_runner.go:130] > # read_only = false
	I0404 22:19:57.623759   37825 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0404 22:19:57.623771   37825 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0404 22:19:57.623782   37825 command_runner.go:130] > # live configuration reload.
	I0404 22:19:57.623791   37825 command_runner.go:130] > # log_level = "info"
	I0404 22:19:57.623799   37825 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0404 22:19:57.623810   37825 command_runner.go:130] > # This option supports live configuration reload.
	I0404 22:19:57.623819   37825 command_runner.go:130] > # log_filter = ""
	I0404 22:19:57.623828   37825 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0404 22:19:57.623842   37825 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0404 22:19:57.623852   37825 command_runner.go:130] > # separated by comma.
	I0404 22:19:57.623868   37825 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0404 22:19:57.623877   37825 command_runner.go:130] > # uid_mappings = ""
	I0404 22:19:57.623887   37825 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0404 22:19:57.623900   37825 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0404 22:19:57.623909   37825 command_runner.go:130] > # separated by comma.
	I0404 22:19:57.623919   37825 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0404 22:19:57.623927   37825 command_runner.go:130] > # gid_mappings = ""
	I0404 22:19:57.623936   37825 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0404 22:19:57.623951   37825 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0404 22:19:57.623964   37825 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0404 22:19:57.623978   37825 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0404 22:19:57.623984   37825 command_runner.go:130] > # minimum_mappable_uid = -1
	I0404 22:19:57.623998   37825 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0404 22:19:57.624011   37825 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0404 22:19:57.624024   37825 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0404 22:19:57.624039   37825 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0404 22:19:57.624046   37825 command_runner.go:130] > # minimum_mappable_gid = -1
	I0404 22:19:57.624059   37825 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0404 22:19:57.624078   37825 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0404 22:19:57.624090   37825 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0404 22:19:57.624100   37825 command_runner.go:130] > # ctr_stop_timeout = 30
	I0404 22:19:57.624114   37825 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0404 22:19:57.624133   37825 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0404 22:19:57.624145   37825 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0404 22:19:57.624155   37825 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0404 22:19:57.624164   37825 command_runner.go:130] > drop_infra_ctr = false
	I0404 22:19:57.624174   37825 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0404 22:19:57.624187   37825 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0404 22:19:57.624204   37825 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0404 22:19:57.624214   37825 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0404 22:19:57.624226   37825 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0404 22:19:57.624238   37825 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0404 22:19:57.624249   37825 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0404 22:19:57.624257   37825 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0404 22:19:57.624263   37825 command_runner.go:130] > # shared_cpuset = ""
	I0404 22:19:57.624276   37825 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0404 22:19:57.624288   37825 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0404 22:19:57.624297   37825 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0404 22:19:57.624309   37825 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0404 22:19:57.624318   37825 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0404 22:19:57.624328   37825 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0404 22:19:57.624340   37825 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0404 22:19:57.624411   37825 command_runner.go:130] > # enable_criu_support = false
	I0404 22:19:57.624433   37825 command_runner.go:130] > # Enable/disable the generation of the container,
	I0404 22:19:57.624444   37825 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0404 22:19:57.624451   37825 command_runner.go:130] > # enable_pod_events = false
	I0404 22:19:57.624462   37825 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0404 22:19:57.624488   37825 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0404 22:19:57.624498   37825 command_runner.go:130] > # default_runtime = "runc"
	I0404 22:19:57.624508   37825 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0404 22:19:57.624524   37825 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0404 22:19:57.624541   37825 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0404 22:19:57.624553   37825 command_runner.go:130] > # creation as a file is not desired either.
	I0404 22:19:57.624579   37825 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0404 22:19:57.624590   37825 command_runner.go:130] > # the hostname is being managed dynamically.
	I0404 22:19:57.624600   37825 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0404 22:19:57.624609   37825 command_runner.go:130] > # ]
	I0404 22:19:57.624619   37825 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0404 22:19:57.624633   37825 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0404 22:19:57.624646   37825 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0404 22:19:57.624658   37825 command_runner.go:130] > # Each entry in the table should follow the format:
	I0404 22:19:57.624663   37825 command_runner.go:130] > #
	I0404 22:19:57.624671   37825 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0404 22:19:57.624683   37825 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0404 22:19:57.624739   37825 command_runner.go:130] > # runtime_type = "oci"
	I0404 22:19:57.624773   37825 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0404 22:19:57.624782   37825 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0404 22:19:57.624786   37825 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0404 22:19:57.624792   37825 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0404 22:19:57.624796   37825 command_runner.go:130] > # monitor_env = []
	I0404 22:19:57.624801   37825 command_runner.go:130] > # privileged_without_host_devices = false
	I0404 22:19:57.624808   37825 command_runner.go:130] > # allowed_annotations = []
	I0404 22:19:57.624813   37825 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0404 22:19:57.624819   37825 command_runner.go:130] > # Where:
	I0404 22:19:57.624824   37825 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0404 22:19:57.624832   37825 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0404 22:19:57.624840   37825 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0404 22:19:57.624846   37825 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0404 22:19:57.624850   37825 command_runner.go:130] > #   in $PATH.
	I0404 22:19:57.624857   37825 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0404 22:19:57.624864   37825 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0404 22:19:57.624870   37825 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0404 22:19:57.624877   37825 command_runner.go:130] > #   state.
	I0404 22:19:57.624883   37825 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0404 22:19:57.624891   37825 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0404 22:19:57.624897   37825 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0404 22:19:57.624904   37825 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0404 22:19:57.624910   37825 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0404 22:19:57.624918   37825 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0404 22:19:57.624928   37825 command_runner.go:130] > #   The currently recognized values are:
	I0404 22:19:57.624937   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0404 22:19:57.624944   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0404 22:19:57.624951   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0404 22:19:57.624956   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0404 22:19:57.624966   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0404 22:19:57.624972   37825 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0404 22:19:57.624982   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0404 22:19:57.624990   37825 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0404 22:19:57.624996   37825 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0404 22:19:57.625002   37825 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0404 22:19:57.625006   37825 command_runner.go:130] > #   deprecated option "conmon".
	I0404 22:19:57.625017   37825 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0404 22:19:57.625024   37825 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0404 22:19:57.625030   37825 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0404 22:19:57.625038   37825 command_runner.go:130] > #   should be moved to the container's cgroup
	I0404 22:19:57.625044   37825 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0404 22:19:57.625049   37825 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0404 22:19:57.625057   37825 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0404 22:19:57.625062   37825 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0404 22:19:57.625067   37825 command_runner.go:130] > #
	I0404 22:19:57.625072   37825 command_runner.go:130] > # Using the seccomp notifier feature:
	I0404 22:19:57.625075   37825 command_runner.go:130] > #
	I0404 22:19:57.625080   37825 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0404 22:19:57.625090   37825 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0404 22:19:57.625093   37825 command_runner.go:130] > #
	I0404 22:19:57.625101   37825 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0404 22:19:57.625109   37825 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0404 22:19:57.625112   37825 command_runner.go:130] > #
	I0404 22:19:57.625118   37825 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0404 22:19:57.625124   37825 command_runner.go:130] > # feature.
	I0404 22:19:57.625127   37825 command_runner.go:130] > #
	I0404 22:19:57.625132   37825 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0404 22:19:57.625138   37825 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0404 22:19:57.625146   37825 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0404 22:19:57.625152   37825 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0404 22:19:57.625166   37825 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0404 22:19:57.625172   37825 command_runner.go:130] > #
	I0404 22:19:57.625177   37825 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0404 22:19:57.625191   37825 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0404 22:19:57.625197   37825 command_runner.go:130] > #
	I0404 22:19:57.625203   37825 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0404 22:19:57.625210   37825 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0404 22:19:57.625214   37825 command_runner.go:130] > #
	I0404 22:19:57.625219   37825 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0404 22:19:57.625227   37825 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0404 22:19:57.625230   37825 command_runner.go:130] > # limitation.
	I0404 22:19:57.625234   37825 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0404 22:19:57.625241   37825 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0404 22:19:57.625245   37825 command_runner.go:130] > runtime_type = "oci"
	I0404 22:19:57.625249   37825 command_runner.go:130] > runtime_root = "/run/runc"
	I0404 22:19:57.625253   37825 command_runner.go:130] > runtime_config_path = ""
	I0404 22:19:57.625257   37825 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0404 22:19:57.625261   37825 command_runner.go:130] > monitor_cgroup = "pod"
	I0404 22:19:57.625265   37825 command_runner.go:130] > monitor_exec_cgroup = ""
	I0404 22:19:57.625269   37825 command_runner.go:130] > monitor_env = [
	I0404 22:19:57.625274   37825 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0404 22:19:57.625279   37825 command_runner.go:130] > ]
	I0404 22:19:57.625284   37825 command_runner.go:130] > privileged_without_host_devices = false
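The [crio.runtime.runtimes.runc] entry above is the handler that pods select through a Kubernetes RuntimeClass whose Handler field matches the table key. A sketch of that object, assuming the k8s.io/api/node/v1 types; in practice you would apply the equivalent YAML with kubectl:

// runtimeclass.go - sketch: a RuntimeClass whose Handler must match a key in
// CRI-O's [crio.runtime.runtimes.*] table (here, "runc").
package main

import (
	"fmt"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	rc := nodev1.RuntimeClass{
		TypeMeta:   metav1.TypeMeta{APIVersion: "node.k8s.io/v1", Kind: "RuntimeClass"},
		ObjectMeta: metav1.ObjectMeta{Name: "runc"},
		Handler:    "runc",
	}
	out, err := yaml.Marshal(rc)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(out))
}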
	I0404 22:19:57.625293   37825 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0404 22:19:57.625298   37825 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0404 22:19:57.625304   37825 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0404 22:19:57.625311   37825 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0404 22:19:57.625321   37825 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0404 22:19:57.625326   37825 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0404 22:19:57.625342   37825 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0404 22:19:57.625352   37825 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0404 22:19:57.625357   37825 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0404 22:19:57.625367   37825 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0404 22:19:57.625370   37825 command_runner.go:130] > # Example:
	I0404 22:19:57.625375   37825 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0404 22:19:57.625381   37825 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0404 22:19:57.625389   37825 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0404 22:19:57.625402   37825 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0404 22:19:57.625406   37825 command_runner.go:130] > # cpuset = 0
	I0404 22:19:57.625410   37825 command_runner.go:130] > # cpushares = "0-1"
	I0404 22:19:57.625413   37825 command_runner.go:130] > # Where:
	I0404 22:19:57.625417   37825 command_runner.go:130] > # The workload name is workload-type.
	I0404 22:19:57.625427   37825 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0404 22:19:57.625432   37825 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0404 22:19:57.625438   37825 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0404 22:19:57.625447   37825 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0404 22:19:57.625468   37825 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0404 22:19:57.625473   37825 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0404 22:19:57.625479   37825 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0404 22:19:57.625486   37825 command_runner.go:130] > # Default value is set to true
	I0404 22:19:57.625490   37825 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0404 22:19:57.625498   37825 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0404 22:19:57.625503   37825 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0404 22:19:57.625509   37825 command_runner.go:130] > # Default value is set to 'false'
	I0404 22:19:57.625514   37825 command_runner.go:130] > # disable_hostport_mapping = false
	I0404 22:19:57.625523   37825 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0404 22:19:57.625526   37825 command_runner.go:130] > #
	I0404 22:19:57.625532   37825 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0404 22:19:57.625540   37825 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0404 22:19:57.625546   37825 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0404 22:19:57.625552   37825 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0404 22:19:57.625557   37825 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0404 22:19:57.625561   37825 command_runner.go:130] > [crio.image]
	I0404 22:19:57.625566   37825 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0404 22:19:57.625570   37825 command_runner.go:130] > # default_transport = "docker://"
	I0404 22:19:57.625576   37825 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0404 22:19:57.625581   37825 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0404 22:19:57.625585   37825 command_runner.go:130] > # global_auth_file = ""
	I0404 22:19:57.625590   37825 command_runner.go:130] > # The image used to instantiate infra containers.
	I0404 22:19:57.625594   37825 command_runner.go:130] > # This option supports live configuration reload.
	I0404 22:19:57.625599   37825 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0404 22:19:57.625605   37825 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0404 22:19:57.625617   37825 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0404 22:19:57.625624   37825 command_runner.go:130] > # This option supports live configuration reload.
	I0404 22:19:57.625631   37825 command_runner.go:130] > # pause_image_auth_file = ""
	I0404 22:19:57.625641   37825 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0404 22:19:57.625651   37825 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0404 22:19:57.625657   37825 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0404 22:19:57.625665   37825 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0404 22:19:57.625669   37825 command_runner.go:130] > # pause_command = "/pause"
	I0404 22:19:57.625677   37825 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0404 22:19:57.625683   37825 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0404 22:19:57.625691   37825 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0404 22:19:57.625699   37825 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0404 22:19:57.625712   37825 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0404 22:19:57.625725   37825 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0404 22:19:57.625735   37825 command_runner.go:130] > # pinned_images = [
	I0404 22:19:57.625739   37825 command_runner.go:130] > # ]
	I0404 22:19:57.625745   37825 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0404 22:19:57.625757   37825 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0404 22:19:57.625766   37825 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0404 22:19:57.625772   37825 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0404 22:19:57.625779   37825 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0404 22:19:57.625783   37825 command_runner.go:130] > # signature_policy = ""
	I0404 22:19:57.625791   37825 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0404 22:19:57.625797   37825 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0404 22:19:57.625805   37825 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0404 22:19:57.625811   37825 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0404 22:19:57.625825   37825 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0404 22:19:57.625835   37825 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0404 22:19:57.625845   37825 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0404 22:19:57.625858   37825 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0404 22:19:57.625868   37825 command_runner.go:130] > # changing them here.
	I0404 22:19:57.625876   37825 command_runner.go:130] > # insecure_registries = [
	I0404 22:19:57.625883   37825 command_runner.go:130] > # ]
	I0404 22:19:57.625891   37825 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0404 22:19:57.625897   37825 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0404 22:19:57.625902   37825 command_runner.go:130] > # image_volumes = "mkdir"
	I0404 22:19:57.625912   37825 command_runner.go:130] > # Temporary directory to use for storing big files
	I0404 22:19:57.625918   37825 command_runner.go:130] > # big_files_temporary_dir = ""
	I0404 22:19:57.625924   37825 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0404 22:19:57.625928   37825 command_runner.go:130] > # CNI plugins.
	I0404 22:19:57.625932   37825 command_runner.go:130] > [crio.network]
	I0404 22:19:57.625937   37825 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0404 22:19:57.625943   37825 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0404 22:19:57.625948   37825 command_runner.go:130] > # cni_default_network = ""
	I0404 22:19:57.625955   37825 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0404 22:19:57.625959   37825 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0404 22:19:57.625967   37825 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0404 22:19:57.625973   37825 command_runner.go:130] > # plugin_dirs = [
	I0404 22:19:57.625979   37825 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0404 22:19:57.625982   37825 command_runner.go:130] > # ]
	I0404 22:19:57.625988   37825 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0404 22:19:57.625993   37825 command_runner.go:130] > [crio.metrics]
	I0404 22:19:57.625998   37825 command_runner.go:130] > # Globally enable or disable metrics support.
	I0404 22:19:57.626002   37825 command_runner.go:130] > enable_metrics = true
	I0404 22:19:57.626007   37825 command_runner.go:130] > # Specify enabled metrics collectors.
	I0404 22:19:57.626012   37825 command_runner.go:130] > # Per default all metrics are enabled.
	I0404 22:19:57.626018   37825 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0404 22:19:57.626024   37825 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0404 22:19:57.626033   37825 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0404 22:19:57.626037   37825 command_runner.go:130] > # metrics_collectors = [
	I0404 22:19:57.626045   37825 command_runner.go:130] > # 	"operations",
	I0404 22:19:57.626049   37825 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0404 22:19:57.626056   37825 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0404 22:19:57.626061   37825 command_runner.go:130] > # 	"operations_errors",
	I0404 22:19:57.626067   37825 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0404 22:19:57.626072   37825 command_runner.go:130] > # 	"image_pulls_by_name",
	I0404 22:19:57.626077   37825 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0404 22:19:57.626081   37825 command_runner.go:130] > # 	"image_pulls_failures",
	I0404 22:19:57.626088   37825 command_runner.go:130] > # 	"image_pulls_successes",
	I0404 22:19:57.626091   37825 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0404 22:19:57.626098   37825 command_runner.go:130] > # 	"image_layer_reuse",
	I0404 22:19:57.626105   37825 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0404 22:19:57.626153   37825 command_runner.go:130] > # 	"containers_oom_total",
	I0404 22:19:57.626163   37825 command_runner.go:130] > # 	"containers_oom",
	I0404 22:19:57.626167   37825 command_runner.go:130] > # 	"processes_defunct",
	I0404 22:19:57.626171   37825 command_runner.go:130] > # 	"operations_total",
	I0404 22:19:57.626176   37825 command_runner.go:130] > # 	"operations_latency_seconds",
	I0404 22:19:57.626182   37825 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0404 22:19:57.626187   37825 command_runner.go:130] > # 	"operations_errors_total",
	I0404 22:19:57.626193   37825 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0404 22:19:57.626198   37825 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0404 22:19:57.626204   37825 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0404 22:19:57.626208   37825 command_runner.go:130] > # 	"image_pulls_success_total",
	I0404 22:19:57.626218   37825 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0404 22:19:57.626222   37825 command_runner.go:130] > # 	"containers_oom_count_total",
	I0404 22:19:57.626226   37825 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0404 22:19:57.626230   37825 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0404 22:19:57.626234   37825 command_runner.go:130] > # ]
	I0404 22:19:57.626239   37825 command_runner.go:130] > # The port on which the metrics server will listen.
	I0404 22:19:57.626245   37825 command_runner.go:130] > # metrics_port = 9090
	I0404 22:19:57.626250   37825 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0404 22:19:57.626256   37825 command_runner.go:130] > # metrics_socket = ""
	I0404 22:19:57.626260   37825 command_runner.go:130] > # The certificate for the secure metrics server.
	I0404 22:19:57.626266   37825 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0404 22:19:57.626272   37825 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0404 22:19:57.626279   37825 command_runner.go:130] > # certificate on any modification event.
	I0404 22:19:57.626283   37825 command_runner.go:130] > # metrics_cert = ""
	I0404 22:19:57.626290   37825 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0404 22:19:57.626294   37825 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0404 22:19:57.626298   37825 command_runner.go:130] > # metrics_key = ""
	I0404 22:19:57.626305   37825 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0404 22:19:57.626309   37825 command_runner.go:130] > [crio.tracing]
	I0404 22:19:57.626315   37825 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0404 22:19:57.626322   37825 command_runner.go:130] > # enable_tracing = false
	I0404 22:19:57.626327   37825 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0404 22:19:57.626332   37825 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0404 22:19:57.626338   37825 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0404 22:19:57.626345   37825 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0404 22:19:57.626353   37825 command_runner.go:130] > # CRI-O NRI configuration.
	I0404 22:19:57.626359   37825 command_runner.go:130] > [crio.nri]
	I0404 22:19:57.626363   37825 command_runner.go:130] > # Globally enable or disable NRI.
	I0404 22:19:57.626367   37825 command_runner.go:130] > # enable_nri = false
	I0404 22:19:57.626371   37825 command_runner.go:130] > # NRI socket to listen on.
	I0404 22:19:57.626378   37825 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0404 22:19:57.626382   37825 command_runner.go:130] > # NRI plugin directory to use.
	I0404 22:19:57.626388   37825 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0404 22:19:57.626393   37825 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0404 22:19:57.626400   37825 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0404 22:19:57.626405   37825 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0404 22:19:57.626411   37825 command_runner.go:130] > # nri_disable_connections = false
	I0404 22:19:57.626416   37825 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0404 22:19:57.626423   37825 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0404 22:19:57.626428   37825 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0404 22:19:57.626434   37825 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0404 22:19:57.626440   37825 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0404 22:19:57.626444   37825 command_runner.go:130] > [crio.stats]
	I0404 22:19:57.626450   37825 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0404 22:19:57.626457   37825 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0404 22:19:57.626461   37825 command_runner.go:130] > # stats_collection_period = 0
	I0404 22:19:57.626915   37825 command_runner.go:130] ! time="2024-04-04 22:19:57.587462216Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0404 22:19:57.626941   37825 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
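The [crio.metrics] section in the dump above shows enable_metrics = true uncommented, while the surrounding keys keep CRI-O's commented-out defaults. For reference, a minimal sketch of setting the same keys through a drop-in file rather than the main config; the drop-in path is an assumed example, not something taken from this run:

	sudo tee /etc/crio/crio.conf.d/02-metrics.conf <<'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio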
	I0404 22:19:57.627094   37825 cni.go:84] Creating CNI manager for ""
	I0404 22:19:57.627109   37825 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0404 22:19:57.627118   37825 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:19:57.627139   37825 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-575162 NodeName:multinode-575162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:19:57.627275   37825 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-575162"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:19:57.627340   37825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:19:57.639202   37825 command_runner.go:130] > kubeadm
	I0404 22:19:57.639224   37825 command_runner.go:130] > kubectl
	I0404 22:19:57.639231   37825 command_runner.go:130] > kubelet
	I0404 22:19:57.641066   37825 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:19:57.641123   37825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:19:57.653092   37825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0404 22:19:57.673190   37825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:19:57.692809   37825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
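The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new. A sketch of checking such a file by hand, assuming the v1.29.3 kubeadm binary found under /var/lib/minikube/binaries earlier in the log:

	# Validate the file against the kubeadm v1beta3 API types, then do a dry run that applies nothing.
	sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.29.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run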
	I0404 22:19:57.712326   37825 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I0404 22:19:57.716529   37825 command_runner.go:130] > 192.168.39.203	control-plane.minikube.internal
	I0404 22:19:57.716757   37825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:19:57.872016   37825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:19:57.888992   37825 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162 for IP: 192.168.39.203
	I0404 22:19:57.889017   37825 certs.go:194] generating shared ca certs ...
	I0404 22:19:57.889035   37825 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:19:57.889190   37825 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:19:57.889226   37825 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:19:57.889250   37825 certs.go:256] generating profile certs ...
	I0404 22:19:57.889335   37825 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/client.key
	I0404 22:19:57.889393   37825 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/apiserver.key.777590d0
	I0404 22:19:57.889432   37825 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/proxy-client.key
	I0404 22:19:57.889443   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0404 22:19:57.889454   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0404 22:19:57.889466   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0404 22:19:57.889478   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0404 22:19:57.889488   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0404 22:19:57.889504   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0404 22:19:57.889516   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0404 22:19:57.889528   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0404 22:19:57.889577   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:19:57.889609   37825 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:19:57.889618   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:19:57.889640   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:19:57.889663   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:19:57.889683   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:19:57.889723   37825 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:19:57.889753   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:19:57.889771   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem -> /usr/share/ca-certificates/12554.pem
	I0404 22:19:57.889782   37825 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> /usr/share/ca-certificates/125542.pem
	I0404 22:19:57.890334   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:19:57.916230   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:19:57.941191   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:19:57.966631   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:19:57.991737   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0404 22:19:58.017186   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 22:19:58.041683   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:19:58.069009   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/multinode-575162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:19:58.094726   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:19:58.121695   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:19:58.147997   37825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:19:58.173393   37825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:19:58.191047   37825 ssh_runner.go:195] Run: openssl version
	I0404 22:19:58.197588   37825 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0404 22:19:58.197683   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:19:58.209539   37825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:19:58.214523   37825 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:19:58.214694   37825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:19:58.214736   37825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:19:58.221213   37825 command_runner.go:130] > b5213941
	I0404 22:19:58.221372   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:19:58.231699   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:19:58.243073   37825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:19:58.247940   37825 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:19:58.247968   37825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:19:58.248003   37825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:19:58.254006   37825 command_runner.go:130] > 51391683
	I0404 22:19:58.254082   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:19:58.263623   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:19:58.274356   37825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:19:58.278951   37825 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:19:58.278991   37825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:19:58.279021   37825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:19:58.284442   37825 command_runner.go:130] > 3ec20f2e
	I0404 22:19:58.284729   37825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
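The repeating test/ls/openssl/ln sequence above registers each CA certificate the way OpenSSL expects: compute the subject hash and symlink the certificate under /etc/ssl/certs as <hash>.0. A generic sketch of that pattern (CERT is a placeholder path):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # the name OpenSSL uses to look the CA up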
	I0404 22:19:58.294274   37825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:19:58.298931   37825 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:19:58.298950   37825 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0404 22:19:58.298963   37825 command_runner.go:130] > Device: 253,1	Inode: 8386566     Links: 1
	I0404 22:19:58.298970   37825 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0404 22:19:58.298976   37825 command_runner.go:130] > Access: 2024-04-04 22:13:26.340612781 +0000
	I0404 22:19:58.298985   37825 command_runner.go:130] > Modify: 2024-04-04 22:13:26.340612781 +0000
	I0404 22:19:58.298991   37825 command_runner.go:130] > Change: 2024-04-04 22:13:26.340612781 +0000
	I0404 22:19:58.298999   37825 command_runner.go:130] >  Birth: 2024-04-04 22:13:26.340612781 +0000
	I0404 22:19:58.299046   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:19:58.304642   37825 command_runner.go:130] > Certificate will not expire
	I0404 22:19:58.304968   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:19:58.310605   37825 command_runner.go:130] > Certificate will not expire
	I0404 22:19:58.310900   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:19:58.317327   37825 command_runner.go:130] > Certificate will not expire
	I0404 22:19:58.317400   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:19:58.323074   37825 command_runner.go:130] > Certificate will not expire
	I0404 22:19:58.323288   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:19:58.329536   37825 command_runner.go:130] > Certificate will not expire
	I0404 22:19:58.329617   37825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:19:58.335847   37825 command_runner.go:130] > Certificate will not expire
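Each openssl x509 -checkend 86400 invocation above exits 0 when the certificate remains valid for at least another 24 hours, which is what the repeated "Certificate will not expire" lines reflect. A sketch of the same check over a few of the profile certificates:

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	    && echo "${c}: will not expire within 24h"
	done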
	I0404 22:19:58.335923   37825 kubeadm.go:391] StartCluster: {Name:multinode-575162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-575162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:19:58.336034   37825 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:19:58.336077   37825 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:19:58.379782   37825 command_runner.go:130] > 2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9
	I0404 22:19:58.379813   37825 command_runner.go:130] > 83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06
	I0404 22:19:58.379821   37825 command_runner.go:130] > b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5
	I0404 22:19:58.379831   37825 command_runner.go:130] > ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f
	I0404 22:19:58.379840   37825 command_runner.go:130] > 1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b
	I0404 22:19:58.379854   37825 command_runner.go:130] > 54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c
	I0404 22:19:58.379863   37825 command_runner.go:130] > 37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e
	I0404 22:19:58.379877   37825 command_runner.go:130] > cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65
	I0404 22:19:58.379907   37825 cri.go:89] found id: "2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9"
	I0404 22:19:58.379918   37825 cri.go:89] found id: "83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06"
	I0404 22:19:58.379923   37825 cri.go:89] found id: "b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5"
	I0404 22:19:58.379928   37825 cri.go:89] found id: "ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f"
	I0404 22:19:58.379935   37825 cri.go:89] found id: "1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b"
	I0404 22:19:58.379939   37825 cri.go:89] found id: "54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c"
	I0404 22:19:58.379943   37825 cri.go:89] found id: "37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e"
	I0404 22:19:58.379950   37825 cri.go:89] found id: "cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65"
	I0404 22:19:58.379954   37825 cri.go:89] found id: ""
	I0404 22:19:58.380011   37825 ssh_runner.go:195] Run: sudo runc list -f json
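The eight IDs listed above come from the --quiet crictl query filtered on the kube-system namespace label. A sketch of follow-up commands that show the same containers in readable form on the node (the inspected ID is simply the first one from the list above):

	# Same filter, default table output instead of bare IDs:
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	# Full CRI metadata for one container:
	sudo crictl inspect 2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9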
	
	
	==> CRI-O <==
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.478074610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269431478052175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a99f99b7-6beb-44b2-a9b7-7a66399293e1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.478936846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f256330-8043-46c7-90eb-bc85987bef9e name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.478992595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f256330-8043-46c7-90eb-bc85987bef9e name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.479562956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2daad0a4e008be6bcdf8ad3ad27913e33a0b3e3abeca7e69c0dfac4c0f826aeb,PodSandboxId:3c705ed4993268d84b25211407eaef83d7246fa6b828b661bcde74ea10b9ff84,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712269238311575143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85cfd3d51d04791e19edca989e7df5f16c2eda3fc78a01862a80a629f9e55b7,PodSandboxId:95ef7610dcf7306d2c0dc723c5fadc5c57168d9be7413ebb22e8fc997f67386e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712269204684709447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f95e6e6792b0fc7710eeccb92e5b03a642410a229d740db2f27fc4bab45fe00d,PodSandboxId:bb9ba1144e3a2089980b5818d30d6279a6c9bca00616833aa21544980f4bb00b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712269204642023280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102e4a1df4286176ee1ed0031104fe277e57f243ef5e3d1787bbcd09c9597adc,PodSandboxId:d3d5bc97297e0205e50e3eab9a5c65e665c2c0174d8c274771452fb39e7aad8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712269204576495111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-880c-933bbcf4179c,},Annotations:map[string
]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb80f60bf1cdf5d549638bfcbadb46678e82604ed36afb2f3fa93be211332074,PodSandboxId:a23f7dc5835a1666c8779faee115274315856a57602f3d1589daea917d9a0b07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269204524502229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.k
ubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659d28fd4ccb2f659ff96b4bc13cc4d0623d7316ede252f0a086b84a03f26d7c,PodSandboxId:f849c1f28f09efb03fc4c2c267c8933bba3ca2479c54a7fcf64ccc193d851b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712269200708596918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a4380c88d06ba,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ec7f749f0106297ded5c43450bc3a6971f583d1352615bea10be145811b183,PodSandboxId:c9c1209088bb4c336a1213b6f72e64ed6ce45744d5b0ce4c21d495844e4bfdbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712269200718618186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.ku
bernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672566204aa04e9c461881544e6c292dd956340e271cd8751bf69a2d5b32d49d,PodSandboxId:6d32b8a2d36b077b5e0871f07fece56861ebb50586b44a24fe3bbc3586ead6fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712269200741187661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{io.kubernetes.container.hash: cf29fcb2,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4d0929884ad44e185c7b8183b01e25e1552dfba18d1eb93636aa6a545fa7b3,PodSandboxId:422d92b789114d4126ddae2eb2431a27a2c4565faaf80b004167863ea70489f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712269200689785391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192456d1920b3744637a5d8edf816962c5352d2f7f1757176c8bfecd5bc5480e,PodSandboxId:c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712268888111609975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9,PodSandboxId:c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712268833141091942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.kubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06,PodSandboxId:8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712268832100832940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5,PodSandboxId:b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712268830713173484,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f,PodSandboxId:e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712268830568151377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-88
0c-933bbcf4179c,},Annotations:map[string]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c,PodSandboxId:4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712268810551019927,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b,PodSandboxId:3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712268810561936164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a
4380c88d06ba,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e,PodSandboxId:beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712268810440750222,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{
io.kubernetes.container.hash: cf29fcb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65,PodSandboxId:ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712268810438584839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f256330-8043-46c7-90eb-bc85987bef9e name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.523632916Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc5a9e07-a58f-462f-8ae0-3e5a1d99e096 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.523711449Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc5a9e07-a58f-462f-8ae0-3e5a1d99e096 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.524738268Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8967e31-0dec-4daf-902a-9510188489dc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.525256081Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269431525230155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8967e31-0dec-4daf-902a-9510188489dc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.525992227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=697953b1-2b1e-4d57-97c1-74d4dbc3e434 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.526072351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=697953b1-2b1e-4d57-97c1-74d4dbc3e434 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.526445968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2daad0a4e008be6bcdf8ad3ad27913e33a0b3e3abeca7e69c0dfac4c0f826aeb,PodSandboxId:3c705ed4993268d84b25211407eaef83d7246fa6b828b661bcde74ea10b9ff84,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712269238311575143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85cfd3d51d04791e19edca989e7df5f16c2eda3fc78a01862a80a629f9e55b7,PodSandboxId:95ef7610dcf7306d2c0dc723c5fadc5c57168d9be7413ebb22e8fc997f67386e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712269204684709447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f95e6e6792b0fc7710eeccb92e5b03a642410a229d740db2f27fc4bab45fe00d,PodSandboxId:bb9ba1144e3a2089980b5818d30d6279a6c9bca00616833aa21544980f4bb00b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712269204642023280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102e4a1df4286176ee1ed0031104fe277e57f243ef5e3d1787bbcd09c9597adc,PodSandboxId:d3d5bc97297e0205e50e3eab9a5c65e665c2c0174d8c274771452fb39e7aad8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712269204576495111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-880c-933bbcf4179c,},Annotations:map[string
]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb80f60bf1cdf5d549638bfcbadb46678e82604ed36afb2f3fa93be211332074,PodSandboxId:a23f7dc5835a1666c8779faee115274315856a57602f3d1589daea917d9a0b07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269204524502229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.k
ubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659d28fd4ccb2f659ff96b4bc13cc4d0623d7316ede252f0a086b84a03f26d7c,PodSandboxId:f849c1f28f09efb03fc4c2c267c8933bba3ca2479c54a7fcf64ccc193d851b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712269200708596918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a4380c88d06ba,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ec7f749f0106297ded5c43450bc3a6971f583d1352615bea10be145811b183,PodSandboxId:c9c1209088bb4c336a1213b6f72e64ed6ce45744d5b0ce4c21d495844e4bfdbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712269200718618186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.ku
bernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672566204aa04e9c461881544e6c292dd956340e271cd8751bf69a2d5b32d49d,PodSandboxId:6d32b8a2d36b077b5e0871f07fece56861ebb50586b44a24fe3bbc3586ead6fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712269200741187661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{io.kubernetes.container.hash: cf29fcb2,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4d0929884ad44e185c7b8183b01e25e1552dfba18d1eb93636aa6a545fa7b3,PodSandboxId:422d92b789114d4126ddae2eb2431a27a2c4565faaf80b004167863ea70489f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712269200689785391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192456d1920b3744637a5d8edf816962c5352d2f7f1757176c8bfecd5bc5480e,PodSandboxId:c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712268888111609975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9,PodSandboxId:c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712268833141091942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.kubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06,PodSandboxId:8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712268832100832940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5,PodSandboxId:b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712268830713173484,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f,PodSandboxId:e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712268830568151377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-88
0c-933bbcf4179c,},Annotations:map[string]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c,PodSandboxId:4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712268810551019927,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b,PodSandboxId:3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712268810561936164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a
4380c88d06ba,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e,PodSandboxId:beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712268810440750222,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{
io.kubernetes.container.hash: cf29fcb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65,PodSandboxId:ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712268810438584839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=697953b1-2b1e-4d57-97c1-74d4dbc3e434 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.573542194Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3726d247-640d-4091-8da5-6901a237ee3c name=/runtime.v1.RuntimeService/Version
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.573618887Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3726d247-640d-4091-8da5-6901a237ee3c name=/runtime.v1.RuntimeService/Version
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.574709331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b1f6b8e-d191-49d5-beec-511afbe98d6f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.575274298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269431575249005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b1f6b8e-d191-49d5-beec-511afbe98d6f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.576014662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=732549d3-2d82-470e-a2ca-5697f18dc1a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.576441209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=732549d3-2d82-470e-a2ca-5697f18dc1a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.577400857Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2daad0a4e008be6bcdf8ad3ad27913e33a0b3e3abeca7e69c0dfac4c0f826aeb,PodSandboxId:3c705ed4993268d84b25211407eaef83d7246fa6b828b661bcde74ea10b9ff84,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712269238311575143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85cfd3d51d04791e19edca989e7df5f16c2eda3fc78a01862a80a629f9e55b7,PodSandboxId:95ef7610dcf7306d2c0dc723c5fadc5c57168d9be7413ebb22e8fc997f67386e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712269204684709447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f95e6e6792b0fc7710eeccb92e5b03a642410a229d740db2f27fc4bab45fe00d,PodSandboxId:bb9ba1144e3a2089980b5818d30d6279a6c9bca00616833aa21544980f4bb00b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712269204642023280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102e4a1df4286176ee1ed0031104fe277e57f243ef5e3d1787bbcd09c9597adc,PodSandboxId:d3d5bc97297e0205e50e3eab9a5c65e665c2c0174d8c274771452fb39e7aad8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712269204576495111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-880c-933bbcf4179c,},Annotations:map[string
]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb80f60bf1cdf5d549638bfcbadb46678e82604ed36afb2f3fa93be211332074,PodSandboxId:a23f7dc5835a1666c8779faee115274315856a57602f3d1589daea917d9a0b07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269204524502229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.k
ubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659d28fd4ccb2f659ff96b4bc13cc4d0623d7316ede252f0a086b84a03f26d7c,PodSandboxId:f849c1f28f09efb03fc4c2c267c8933bba3ca2479c54a7fcf64ccc193d851b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712269200708596918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a4380c88d06ba,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ec7f749f0106297ded5c43450bc3a6971f583d1352615bea10be145811b183,PodSandboxId:c9c1209088bb4c336a1213b6f72e64ed6ce45744d5b0ce4c21d495844e4bfdbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712269200718618186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.ku
bernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672566204aa04e9c461881544e6c292dd956340e271cd8751bf69a2d5b32d49d,PodSandboxId:6d32b8a2d36b077b5e0871f07fece56861ebb50586b44a24fe3bbc3586ead6fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712269200741187661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{io.kubernetes.container.hash: cf29fcb2,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4d0929884ad44e185c7b8183b01e25e1552dfba18d1eb93636aa6a545fa7b3,PodSandboxId:422d92b789114d4126ddae2eb2431a27a2c4565faaf80b004167863ea70489f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712269200689785391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192456d1920b3744637a5d8edf816962c5352d2f7f1757176c8bfecd5bc5480e,PodSandboxId:c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712268888111609975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9,PodSandboxId:c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712268833141091942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.kubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06,PodSandboxId:8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712268832100832940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5,PodSandboxId:b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712268830713173484,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f,PodSandboxId:e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712268830568151377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-88
0c-933bbcf4179c,},Annotations:map[string]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c,PodSandboxId:4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712268810551019927,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b,PodSandboxId:3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712268810561936164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a
4380c88d06ba,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e,PodSandboxId:beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712268810440750222,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{
io.kubernetes.container.hash: cf29fcb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65,PodSandboxId:ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712268810438584839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=732549d3-2d82-470e-a2ca-5697f18dc1a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.623903338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2127800b-c6ad-4e14-9b7c-5a264359006d name=/runtime.v1.RuntimeService/Version
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.623983341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2127800b-c6ad-4e14-9b7c-5a264359006d name=/runtime.v1.RuntimeService/Version
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.625035730Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92015abc-8b64-4172-bbdf-78d6130f7e66 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.625846574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269431625817589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92015abc-8b64-4172-bbdf-78d6130f7e66 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.626406865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ad77e30-0620-4eb8-8912-6629f83e3e67 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.626461409Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ad77e30-0620-4eb8-8912-6629f83e3e67 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:23:51 multinode-575162 crio[2846]: time="2024-04-04 22:23:51.626794020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2daad0a4e008be6bcdf8ad3ad27913e33a0b3e3abeca7e69c0dfac4c0f826aeb,PodSandboxId:3c705ed4993268d84b25211407eaef83d7246fa6b828b661bcde74ea10b9ff84,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712269238311575143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85cfd3d51d04791e19edca989e7df5f16c2eda3fc78a01862a80a629f9e55b7,PodSandboxId:95ef7610dcf7306d2c0dc723c5fadc5c57168d9be7413ebb22e8fc997f67386e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712269204684709447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f95e6e6792b0fc7710eeccb92e5b03a642410a229d740db2f27fc4bab45fe00d,PodSandboxId:bb9ba1144e3a2089980b5818d30d6279a6c9bca00616833aa21544980f4bb00b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712269204642023280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102e4a1df4286176ee1ed0031104fe277e57f243ef5e3d1787bbcd09c9597adc,PodSandboxId:d3d5bc97297e0205e50e3eab9a5c65e665c2c0174d8c274771452fb39e7aad8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712269204576495111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-880c-933bbcf4179c,},Annotations:map[string
]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb80f60bf1cdf5d549638bfcbadb46678e82604ed36afb2f3fa93be211332074,PodSandboxId:a23f7dc5835a1666c8779faee115274315856a57602f3d1589daea917d9a0b07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269204524502229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.k
ubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659d28fd4ccb2f659ff96b4bc13cc4d0623d7316ede252f0a086b84a03f26d7c,PodSandboxId:f849c1f28f09efb03fc4c2c267c8933bba3ca2479c54a7fcf64ccc193d851b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712269200708596918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a4380c88d06ba,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ec7f749f0106297ded5c43450bc3a6971f583d1352615bea10be145811b183,PodSandboxId:c9c1209088bb4c336a1213b6f72e64ed6ce45744d5b0ce4c21d495844e4bfdbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712269200718618186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.ku
bernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672566204aa04e9c461881544e6c292dd956340e271cd8751bf69a2d5b32d49d,PodSandboxId:6d32b8a2d36b077b5e0871f07fece56861ebb50586b44a24fe3bbc3586ead6fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712269200741187661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{io.kubernetes.container.hash: cf29fcb2,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4d0929884ad44e185c7b8183b01e25e1552dfba18d1eb93636aa6a545fa7b3,PodSandboxId:422d92b789114d4126ddae2eb2431a27a2c4565faaf80b004167863ea70489f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712269200689785391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192456d1920b3744637a5d8edf816962c5352d2f7f1757176c8bfecd5bc5480e,PodSandboxId:c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712268888111609975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlm6j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b403c4d-20e6-4b64-ae52-fcc9ac940d7e,},Annotations:map[string]string{io.kubernetes.container.hash: c640ce0d,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dce4432373d8b835c96e460bb3b68c05172de1ecd87b2fa90d2f4bd63cc23d9,PodSandboxId:c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712268833141091942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ce752-ae9c-4d7b-b869-63ce1e8f94e9,},Annotations:map[string]string{io.kubernetes.container.hash: 53b9ebc9,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06,PodSandboxId:8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712268832100832940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5flx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47125dc9-91e8-4824-b956-06d1e759a21f,},Annotations:map[string]string{io.kubernetes.container.hash: a0727f06,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5,PodSandboxId:b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712268830713173484,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l9sdd,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d0074f1f-69d4-49ab-9e2f-10c97b91ae01,},Annotations:map[string]string{io.kubernetes.container.hash: d8762788,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f,PodSandboxId:e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712268830568151377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4qc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6efa678-d0b7-4708-88
0c-933bbcf4179c,},Annotations:map[string]string{io.kubernetes.container.hash: eab2351f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c,PodSandboxId:4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712268810551019927,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d53f9e041d32925a2c1c7a5f2bf7594
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b,PodSandboxId:3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712268810561936164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707915b69936f4e0289a
4380c88d06ba,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e,PodSandboxId:beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712268810440750222,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2657366d5a79ca39aad046bc2b34b2e9,},Annotations:map[string]string{
io.kubernetes.container.hash: cf29fcb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65,PodSandboxId:ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712268810438584839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-575162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65788567edb4a3228a58bce04f0fbc42,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ad77e30-0620-4eb8-8912-6629f83e3e67 name=/runtime.v1.RuntimeService/ListContainers
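	The ListContainers, Version and ImageFsInfo request/response pairs above are the kubelet's periodic polling of CRI-O over the CRI gRPC API. As a point of reference only (not part of the test run), a minimal Go sketch of issuing the same unfiltered ListContainers call is shown below; it assumes CRI-O's default socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 client, and the identifiers in it are illustrative.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O endpoint; adjust if the runtime uses a different socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns the full list, matching the
		// "No filters were applied" debug lines logged by CRI-O above.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
			&runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s  attempt=%d\n",
				c.Id[:13], c.Metadata.Name, c.State, c.Metadata.Attempt)
		}
	}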
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2daad0a4e008b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   3c705ed499326       busybox-7fdf7869d9-dlm6j
	d85cfd3d51d04       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   95ef7610dcf73       kindnet-l9sdd
	f95e6e6792b0f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   bb9ba1144e3a2       coredns-76f75df574-r5flx
	102e4a1df4286       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      3 minutes ago       Running             kube-proxy                1                   d3d5bc97297e0       kube-proxy-p4qc2
	fb80f60bf1cdf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   a23f7dc5835a1       storage-provisioner
	672566204aa04       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   6d32b8a2d36b0       etcd-multinode-575162
	c6ec7f749f010       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      3 minutes ago       Running             kube-apiserver            1                   c9c1209088bb4       kube-apiserver-multinode-575162
	659d28fd4ccb2       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      3 minutes ago       Running             kube-controller-manager   1                   f849c1f28f09e       kube-controller-manager-multinode-575162
	fd4d0929884ad       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      3 minutes ago       Running             kube-scheduler            1                   422d92b789114       kube-scheduler-multinode-575162
	192456d1920b3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   c7e895b75a604       busybox-7fdf7869d9-dlm6j
	2dce4432373d8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   c28d0930476da       storage-provisioner
	83e49da2db9e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   8aaf4ecaf186e       coredns-76f75df574-r5flx
	b6effb9553a51       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      10 minutes ago      Exited              kindnet-cni               0                   b8e602765fb09       kindnet-l9sdd
	ffdc3c748508c       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      10 minutes ago      Exited              kube-proxy                0                   e83602b28499f       kube-proxy-p4qc2
	1c8f7d8794514       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      10 minutes ago      Exited              kube-controller-manager   0                   3792f7ece0d72       kube-controller-manager-multinode-575162
	54ccdf173a397       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      10 minutes ago      Exited              kube-scheduler            0                   4018c8dd6629e       kube-scheduler-multinode-575162
	37301234a6dc1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   beb7585b145cd       etcd-multinode-575162
	cdff1c4750bae       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      10 minutes ago      Exited              kube-apiserver            0                   ceacf97c23d9f       kube-apiserver-multinode-575162
	
	
	==> coredns [83e49da2db9e5df2e686d0ace02d205c351c5ca2b150f13cfb946a05b2a41d06] <==
	[INFO] 10.244.1.2:54098 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001968283s
	[INFO] 10.244.1.2:38677 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126522s
	[INFO] 10.244.1.2:45073 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099451s
	[INFO] 10.244.1.2:38610 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001466786s
	[INFO] 10.244.1.2:36266 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007049s
	[INFO] 10.244.1.2:46397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098746s
	[INFO] 10.244.1.2:33139 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080132s
	[INFO] 10.244.0.3:38244 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009703s
	[INFO] 10.244.0.3:54175 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077883s
	[INFO] 10.244.0.3:33752 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117215s
	[INFO] 10.244.0.3:52462 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000032671s
	[INFO] 10.244.1.2:42574 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162733s
	[INFO] 10.244.1.2:48042 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113087s
	[INFO] 10.244.1.2:58404 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067235s
	[INFO] 10.244.1.2:55519 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110653s
	[INFO] 10.244.0.3:34341 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081425s
	[INFO] 10.244.0.3:50706 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139737s
	[INFO] 10.244.0.3:34366 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065691s
	[INFO] 10.244.0.3:48500 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150261s
	[INFO] 10.244.1.2:34154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155481s
	[INFO] 10.244.1.2:44155 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129107s
	[INFO] 10.244.1.2:37095 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131297s
	[INFO] 10.244.1.2:49878 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115946s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f95e6e6792b0fc7710eeccb92e5b03a642410a229d740db2f27fc4bab45fe00d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35002 - 53422 "HINFO IN 994327420128257834.6641705586587126964. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.018400575s
	
	
	==> describe nodes <==
	Name:               multinode-575162
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-575162
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=multinode-575162
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T22_13_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 22:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-575162
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 22:23:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 22:20:03 +0000   Thu, 04 Apr 2024 22:13:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 22:20:03 +0000   Thu, 04 Apr 2024 22:13:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 22:20:03 +0000   Thu, 04 Apr 2024 22:13:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 22:20:03 +0000   Thu, 04 Apr 2024 22:13:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    multinode-575162
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa4e7d2217784c3bb6e858eb20908b44
	  System UUID:                aa4e7d22-1778-4c3b-b6e8-58eb20908b44
	  Boot ID:                    b1c84359-b966-4d9c-94e3-8e33fb243db7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-dlm6j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 coredns-76f75df574-r5flx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-575162                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-l9sdd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-575162             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-575162    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-p4qc2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-575162             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-575162 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-575162 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-575162 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-575162 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-575162 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-575162 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-575162 event: Registered Node multinode-575162 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-575162 status is now: NodeReady
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node multinode-575162 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node multinode-575162 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node multinode-575162 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m35s                  node-controller  Node multinode-575162 event: Registered Node multinode-575162 in Controller
	
	
	Name:               multinode-575162-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-575162-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=multinode-575162
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_04T22_20_46_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 22:20:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-575162-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 22:21:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 04 Apr 2024 22:21:15 +0000   Thu, 04 Apr 2024 22:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 04 Apr 2024 22:21:15 +0000   Thu, 04 Apr 2024 22:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 04 Apr 2024 22:21:15 +0000   Thu, 04 Apr 2024 22:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 04 Apr 2024 22:21:15 +0000   Thu, 04 Apr 2024 22:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    multinode-575162-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa834ddf9dd9403b9314231d9a54ae9e
	  System UUID:                fa834ddf-9dd9-403b-9314-231d9a54ae9e
	  Boot ID:                    612ce0dd-6cab-46af-9ef6-e57ba44eca15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ldcpv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kindnet-z2j24               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m20s
	  kube-system                 kube-proxy-ggctb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  Starting                 9m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m20s (x2 over 9m20s)  kubelet          Node multinode-575162-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s (x2 over 9m20s)  kubelet          Node multinode-575162-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s (x2 over 9m20s)  kubelet          Node multinode-575162-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m10s                  kubelet          Node multinode-575162-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m6s)    kubelet          Node multinode-575162-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m6s)    kubelet          Node multinode-575162-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m6s)    kubelet          Node multinode-575162-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m57s                  kubelet          Node multinode-575162-m02 status is now: NodeReady
	  Normal  NodeNotReady             105s                   node-controller  Node multinode-575162-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.122243] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.163521] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.133869] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.292735] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.550089] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.059177] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.396745] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.759266] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.547724] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +0.086571] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.585037] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +0.107006] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 4 22:14] kauditd_printk_skb: 82 callbacks suppressed
	[Apr 4 22:19] systemd-fstab-generator[2765]: Ignoring "noauto" option for root device
	[  +0.169541] systemd-fstab-generator[2777]: Ignoring "noauto" option for root device
	[  +0.188499] systemd-fstab-generator[2791]: Ignoring "noauto" option for root device
	[  +0.160724] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.310102] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +8.620326] systemd-fstab-generator[2929]: Ignoring "noauto" option for root device
	[  +0.080862] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.888984] systemd-fstab-generator[3053]: Ignoring "noauto" option for root device
	[Apr 4 22:20] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.532292] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.922388] systemd-fstab-generator[3866]: Ignoring "noauto" option for root device
	[ +18.368961] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [37301234a6dc124470b8e684c4ed1c35be3f58487d94be039052c4482191549e] <==
	{"level":"info","ts":"2024-04-04T22:13:30.828855Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-04T22:13:30.82898Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-04T22:13:30.829296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:13:30.831743Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.203:2379"}
	{"level":"info","ts":"2024-04-04T22:13:30.833442Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:13:30.845274Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:13:30.836011Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-04T22:13:30.859815Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-04-04T22:14:31.38322Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.77246ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3888170719143125443 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-575162-m02.17c333775d3ac571\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-575162-m02.17c333775d3ac571\" value_size:642 lease:3888170719143124392 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-04T22:14:31.383867Z","caller":"traceutil/trace.go:171","msg":"trace[966313568] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"259.646999ms","start":"2024-04-04T22:14:31.124183Z","end":"2024-04-04T22:14:31.38383Z","steps":["trace[966313568] 'process raft request'  (duration: 96.725395ms)","trace[966313568] 'compare'  (duration: 161.528254ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-04T22:14:31.383929Z","caller":"traceutil/trace.go:171","msg":"trace[355165613] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"194.061141ms","start":"2024-04-04T22:14:31.189773Z","end":"2024-04-04T22:14:31.383834Z","steps":["trace[355165613] 'process raft request'  (duration: 193.801312ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:15:20.723538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.636952ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3888170719143125867 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-575162-m03.17c33382db734c7d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-575162-m03.17c33382db734c7d\" value_size:646 lease:3888170719143125508 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-04T22:15:20.724167Z","caller":"traceutil/trace.go:171","msg":"trace[2066032871] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"249.905095ms","start":"2024-04-04T22:15:20.474239Z","end":"2024-04-04T22:15:20.724144Z","steps":["trace[2066032871] 'process raft request'  (duration: 87.528968ms)","trace[2066032871] 'compare'  (duration: 161.343296ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-04T22:15:20.724479Z","caller":"traceutil/trace.go:171","msg":"trace[440202101] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"163.439918ms","start":"2024-04-04T22:15:20.561029Z","end":"2024-04-04T22:15:20.724469Z","steps":["trace[440202101] 'process raft request'  (duration: 162.874447ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T22:15:28.522781Z","caller":"traceutil/trace.go:171","msg":"trace[1892569076] transaction","detail":"{read_only:false; response_revision:665; number_of_response:1; }","duration":"107.569828ms","start":"2024-04-04T22:15:28.415188Z","end":"2024-04-04T22:15:28.522758Z","steps":["trace[1892569076] 'process raft request'  (duration: 107.445858ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T22:18:17.039758Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-04T22:18:17.039887Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-575162","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	{"level":"warn","ts":"2024-04-04T22:18:17.039992Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-04T22:18:17.040084Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-04T22:18:17.076788Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-04T22:18:17.076866Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-04T22:18:17.078287Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"28dd8e6bbca035f5","current-leader-member-id":"28dd8e6bbca035f5"}
	{"level":"info","ts":"2024-04-04T22:18:17.085784Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-04-04T22:18:17.085936Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-04-04T22:18:17.085975Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-575162","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	
	
	==> etcd [672566204aa04e9c461881544e6c292dd956340e271cd8751bf69a2d5b32d49d] <==
	{"level":"info","ts":"2024-04-04T22:20:01.215741Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-04T22:20:01.215751Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-04T22:20:01.218605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 switched to configuration voters=(2944666324747433461)"}
	{"level":"info","ts":"2024-04-04T22:20:01.218807Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","added-peer-id":"28dd8e6bbca035f5","added-peer-peer-urls":["https://192.168.39.203:2380"]}
	{"level":"info","ts":"2024-04-04T22:20:01.218971Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:20:01.219023Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:20:01.239969Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-04T22:20:01.240236Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"28dd8e6bbca035f5","initial-advertise-peer-urls":["https://192.168.39.203:2380"],"listen-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-04T22:20:01.243855Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-04-04T22:20:01.244098Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-04-04T22:20:01.244117Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-04T22:20:02.258812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-04T22:20:02.258919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-04T22:20:02.259025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgPreVoteResp from 28dd8e6bbca035f5 at term 2"}
	{"level":"info","ts":"2024-04-04T22:20:02.259059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became candidate at term 3"}
	{"level":"info","ts":"2024-04-04T22:20:02.259083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgVoteResp from 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-04-04T22:20:02.25911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became leader at term 3"}
	{"level":"info","ts":"2024-04-04T22:20:02.259142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28dd8e6bbca035f5 elected leader 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-04-04T22:20:02.265808Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"28dd8e6bbca035f5","local-member-attributes":"{Name:multinode-575162 ClientURLs:[https://192.168.39.203:2379]}","request-path":"/0/members/28dd8e6bbca035f5/attributes","cluster-id":"3b4a61fb6ca7242f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-04T22:20:02.265906Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:20:02.267993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-04T22:20:02.279136Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:20:02.280409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-04T22:20:02.28047Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-04T22:20:02.282222Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.203:2379"}
	
	
	==> kernel <==
	 22:23:52 up 10 min,  0 users,  load average: 0.15, 0.20, 0.11
	Linux multinode-575162 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b6effb9553a5145c148b2ce102df3abd7aabee0900e7c9f40a5f823ca47e9cf5] <==
	I0404 22:17:31.748443       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	I0404 22:17:41.753702       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:17:41.753740       1 main.go:227] handling current node
	I0404 22:17:41.753751       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:17:41.753757       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:17:41.753877       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:17:41.753906       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	I0404 22:17:51.767161       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:17:51.767209       1 main.go:227] handling current node
	I0404 22:17:51.767254       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:17:51.767264       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:17:51.767452       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:17:51.767484       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	I0404 22:18:01.780736       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:18:01.780786       1 main.go:227] handling current node
	I0404 22:18:01.780797       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:18:01.780802       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:18:01.780932       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:18:01.780960       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	I0404 22:18:11.795253       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:18:11.795413       1 main.go:227] handling current node
	I0404 22:18:11.795440       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:18:11.795460       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:18:11.795599       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0404 22:18:11.795619       1 main.go:250] Node multinode-575162-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d85cfd3d51d04791e19edca989e7df5f16c2eda3fc78a01862a80a629f9e55b7] <==
	I0404 22:22:45.854408       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:22:55.864660       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:22:55.864702       1 main.go:227] handling current node
	I0404 22:22:55.864714       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:22:55.864719       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:23:05.870672       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:23:05.873498       1 main.go:227] handling current node
	I0404 22:23:05.873618       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:23:05.873647       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:23:15.883893       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:23:15.883943       1 main.go:227] handling current node
	I0404 22:23:15.883956       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:23:15.883961       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:23:25.890068       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:23:25.890113       1 main.go:227] handling current node
	I0404 22:23:25.890124       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:23:25.890131       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:23:35.906212       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:23:35.906294       1 main.go:227] handling current node
	I0404 22:23:35.906367       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:23:35.906376       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	I0404 22:23:45.914296       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0404 22:23:45.914435       1 main.go:227] handling current node
	I0404 22:23:45.914446       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0404 22:23:45.914452       1 main.go:250] Node multinode-575162-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [c6ec7f749f0106297ded5c43450bc3a6971f583d1352615bea10be145811b183] <==
	I0404 22:20:03.637622       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0404 22:20:03.637636       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0404 22:20:03.637647       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0404 22:20:03.754570       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0404 22:20:03.754839       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0404 22:20:03.756586       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0404 22:20:03.778964       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0404 22:20:03.800686       1 aggregator.go:165] initial CRD sync complete...
	I0404 22:20:03.800742       1 autoregister_controller.go:141] Starting autoregister controller
	I0404 22:20:03.800749       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0404 22:20:03.800755       1 cache.go:39] Caches are synced for autoregister controller
	I0404 22:20:03.804612       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0404 22:20:03.805669       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0404 22:20:03.820265       1 shared_informer.go:318] Caches are synced for configmaps
	I0404 22:20:03.820882       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0404 22:20:03.820933       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0404 22:20:03.875393       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0404 22:20:04.625276       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0404 22:20:05.655038       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0404 22:20:05.821010       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0404 22:20:05.840438       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0404 22:20:05.939673       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0404 22:20:05.953191       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0404 22:20:16.629902       1 controller.go:624] quota admission added evaluator for: endpoints
	I0404 22:20:16.930927       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [cdff1c4750bae8b42352bcba8df36f5545d68345fe7a2699c174a3b473845c65] <==
	W0404 22:18:17.063944       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.063980       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064009       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064037       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064067       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064096       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064130       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.064161       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067059       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067098       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067124       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067149       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067173       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067208       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067235       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067733       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067770       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067797       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067828       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067854       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067891       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.067954       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.068089       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.068100       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 22:18:17.068123       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1c8f7d8794514c802a729bead16a9047fc54cbdc3233c083f9e6dfb0251f562b] <==
	I0404 22:14:48.941693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="5.92921ms"
	I0404 22:14:48.941885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="34.92µs"
	I0404 22:15:20.727127       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:15:20.727613       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-575162-m03\" does not exist"
	I0404 22:15:20.748416       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-575162-m03" podCIDRs=["10.244.2.0/24"]
	I0404 22:15:20.772761       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pcc2s"
	I0404 22:15:20.772834       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tmn7c"
	I0404 22:15:24.712618       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-575162-m03"
	I0404 22:15:24.713023       1 event.go:376] "Event occurred" object="multinode-575162-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-575162-m03 event: Registered Node multinode-575162-m03 in Controller"
	I0404 22:15:30.352952       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:16:00.387032       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:16:01.537442       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:16:01.538787       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-575162-m03\" does not exist"
	I0404 22:16:01.553638       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-575162-m03" podCIDRs=["10.244.3.0/24"]
	I0404 22:16:11.150131       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:16:54.764681       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:16:54.765044       1 event.go:376] "Event occurred" object="multinode-575162-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-575162-m03 status is now: NodeNotReady"
	I0404 22:16:54.774496       1 event.go:376] "Event occurred" object="multinode-575162-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-575162-m02 status is now: NodeNotReady"
	I0404 22:16:54.782691       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-pcc2s" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:16:54.793188       1 event.go:376] "Event occurred" object="kube-system/kindnet-z2j24" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:16:54.803577       1 event.go:376] "Event occurred" object="kube-system/kindnet-tmn7c" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:16:54.809983       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-ggctb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:16:54.823747       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-t8948" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:16:54.831505       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.795662ms"
	I0404 22:16:54.831727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="81.294µs"
	
	
	==> kube-controller-manager [659d28fd4ccb2f659ff96b4bc13cc4d0623d7316ede252f0a086b84a03f26d7c] <==
	I0404 22:20:47.461073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="68.134µs"
	I0404 22:20:47.461682       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-t8948" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-t8948"
	I0404 22:20:54.626706       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:20:54.651827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="128.846µs"
	I0404 22:20:54.666773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="83.023µs"
	I0404 22:20:56.698552       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ldcpv" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ldcpv"
	I0404 22:20:57.551615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.103006ms"
	I0404 22:20:57.551901       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="96.165µs"
	I0404 22:21:14.181621       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:21:15.293985       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-575162-m03\" does not exist"
	I0404 22:21:15.294296       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:21:15.318906       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-575162-m03" podCIDRs=["10.244.2.0/24"]
	I0404 22:21:24.370237       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:21:30.137725       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-575162-m02"
	I0404 22:21:31.716094       1 event.go:376] "Event occurred" object="multinode-575162-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-575162-m03 event: Removing Node multinode-575162-m03 from Controller"
	I0404 22:22:06.734656       1 event.go:376] "Event occurred" object="multinode-575162-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-575162-m02 status is now: NodeNotReady"
	I0404 22:22:06.747438       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-ggctb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:22:06.762827       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ldcpv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:22:06.779482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="16.719812ms"
	I0404 22:22:06.781548       1 event.go:376] "Event occurred" object="kube-system/kindnet-z2j24" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0404 22:22:06.781850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="67.694µs"
	I0404 22:22:16.687529       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-tmn7c"
	I0404 22:22:16.717642       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-tmn7c"
	I0404 22:22:16.717693       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-pcc2s"
	I0404 22:22:16.749042       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-pcc2s"
	
	
	==> kube-proxy [102e4a1df4286176ee1ed0031104fe277e57f243ef5e3d1787bbcd09c9597adc] <==
	I0404 22:20:04.895085       1 server_others.go:72] "Using iptables proxy"
	I0404 22:20:04.934736       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	I0404 22:20:04.998906       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 22:20:04.998963       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 22:20:04.998986       1 server_others.go:168] "Using iptables Proxier"
	I0404 22:20:05.002622       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 22:20:05.003214       1 server.go:865] "Version info" version="v1.29.3"
	I0404 22:20:05.003256       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:20:05.006411       1 config.go:188] "Starting service config controller"
	I0404 22:20:05.006474       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 22:20:05.006516       1 config.go:97] "Starting endpoint slice config controller"
	I0404 22:20:05.006543       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 22:20:05.007218       1 config.go:315] "Starting node config controller"
	I0404 22:20:05.007261       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 22:20:05.106999       1 shared_informer.go:318] Caches are synced for service config
	I0404 22:20:05.107466       1 shared_informer.go:318] Caches are synced for node config
	I0404 22:20:05.106875       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ffdc3c748508cbcdafae4fbfb3d6628d83a809e76cd2e71bff72fb7b5890ea0f] <==
	I0404 22:13:51.045552       1 server_others.go:72] "Using iptables proxy"
	I0404 22:13:51.072269       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	I0404 22:13:51.121028       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 22:13:51.121051       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 22:13:51.121064       1 server_others.go:168] "Using iptables Proxier"
	I0404 22:13:51.125967       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 22:13:51.126259       1 server.go:865] "Version info" version="v1.29.3"
	I0404 22:13:51.126382       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:13:51.128010       1 config.go:188] "Starting service config controller"
	I0404 22:13:51.129354       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 22:13:51.128544       1 config.go:97] "Starting endpoint slice config controller"
	I0404 22:13:51.129420       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 22:13:51.128906       1 config.go:315] "Starting node config controller"
	I0404 22:13:51.129429       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 22:13:51.230529       1 shared_informer.go:318] Caches are synced for node config
	I0404 22:13:51.230570       1 shared_informer.go:318] Caches are synced for service config
	I0404 22:13:51.230589       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [54ccdf173a397e81fe2dd077edb82a74a4673225afa4ab5cbb89570837275c1c] <==
	E0404 22:13:33.325587       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0404 22:13:33.326151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0404 22:13:34.141611       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0404 22:13:34.141673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0404 22:13:34.147008       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0404 22:13:34.147082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0404 22:13:34.154618       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0404 22:13:34.154667       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0404 22:13:34.219735       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0404 22:13:34.219801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0404 22:13:34.259974       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0404 22:13:34.260109       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0404 22:13:34.299470       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0404 22:13:34.299507       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 22:13:34.485640       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0404 22:13:34.485710       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0404 22:13:34.530647       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0404 22:13:34.530999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0404 22:13:34.551854       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0404 22:13:34.552256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0404 22:13:36.501406       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:18:17.048442       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0404 22:18:17.048540       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0404 22:18:17.057163       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0404 22:18:17.059665       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fd4d0929884ad44e185c7b8183b01e25e1552dfba18d1eb93636aa6a545fa7b3] <==
	I0404 22:20:01.938683       1 serving.go:380] Generated self-signed cert in-memory
	W0404 22:20:03.680051       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0404 22:20:03.680208       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 22:20:03.680291       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0404 22:20:03.680416       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0404 22:20:03.780927       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0404 22:20:03.781055       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:20:03.785781       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0404 22:20:03.788407       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0404 22:20:03.788751       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:20:03.788430       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0404 22:20:03.889221       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 04 22:22:00 multinode-575162 kubelet[3060]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 22:22:00 multinode-575162 kubelet[3060]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 22:22:00 multinode-575162 kubelet[3060]: E0404 22:22:00.084875    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod707915b69936f4e0289a4380c88d06ba/crio-3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5: Error finding container 3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5: Status 404 returned error can't find the container with id 3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5
	Apr 04 22:22:00 multinode-575162 kubelet[3060]: E0404 22:22:00.085177    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0d53f9e041d32925a2c1c7a5f2bf7594/crio-4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f: Error finding container 4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f: Status 404 returned error can't find the container with id 4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f
	Apr 04 22:22:00 multinode-575162 kubelet[3060]: E0404 22:22:00.085685    3060 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod8b403c4d-20e6-4b64-ae52-fcc9ac940d7e/crio-c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5: Error finding container c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5: Status 404 returned error can't find the container with id c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5
	Apr 04 22:22:00 multinode-575162 kubelet[3060]: E0404 22:22:00.086136    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod47125dc9-91e8-4824-b956-06d1e759a21f/crio-8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171: Error finding container 8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171: Status 404 returned error can't find the container with id 8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171
	Apr 04 22:22:00 multinode-575162 kubelet[3060]: E0404 22:22:00.086464    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod65788567edb4a3228a58bce04f0fbc42/crio-ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8: Error finding container ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8: Status 404 returned error can't find the container with id ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8
	Apr 04 22:22:00 multinode-575162 kubelet[3060]: E0404 22:22:00.086810    3060 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podc6efa678-d0b7-4708-880c-933bbcf4179c/crio-e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3: Error finding container e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3: Status 404 returned error can't find the container with id e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3
	Apr 04 22:22:00 multinode-575162 kubelet[3060]: E0404 22:22:00.087115    3060 manager.go:1116] Failed to create existing container: /kubepods/podd0074f1f-69d4-49ab-9e2f-10c97b91ae01/crio-b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee: Error finding container b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee: Status 404 returned error can't find the container with id b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee
	Apr 04 22:22:00 multinode-575162 kubelet[3060]: E0404 22:22:00.087388    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2657366d5a79ca39aad046bc2b34b2e9/crio-beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282: Error finding container beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282: Status 404 returned error can't find the container with id beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282
	Apr 04 22:22:00 multinode-575162 kubelet[3060]: E0404 22:22:00.087642    3060 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda92ce752-ae9c-4d7b-b869-63ce1e8f94e9/crio-c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3: Error finding container c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3: Status 404 returned error can't find the container with id c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3
	Apr 04 22:23:00 multinode-575162 kubelet[3060]: E0404 22:23:00.078085    3060 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 22:23:00 multinode-575162 kubelet[3060]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 22:23:00 multinode-575162 kubelet[3060]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 22:23:00 multinode-575162 kubelet[3060]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 22:23:00 multinode-575162 kubelet[3060]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 22:23:00 multinode-575162 kubelet[3060]: E0404 22:23:00.083741    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2657366d5a79ca39aad046bc2b34b2e9/crio-beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282: Error finding container beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282: Status 404 returned error can't find the container with id beb7585b145cd478cfb9377d8a21ae3c3693fc580dc658fa95cd13c16cc31282
	Apr 04 22:23:00 multinode-575162 kubelet[3060]: E0404 22:23:00.084114    3060 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod8b403c4d-20e6-4b64-ae52-fcc9ac940d7e/crio-c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5: Error finding container c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5: Status 404 returned error can't find the container with id c7e895b75a604c55276f10e7a75d7f37d68283f5a6fbee77879bafc5a620fad5
	Apr 04 22:23:00 multinode-575162 kubelet[3060]: E0404 22:23:00.084402    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod47125dc9-91e8-4824-b956-06d1e759a21f/crio-8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171: Error finding container 8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171: Status 404 returned error can't find the container with id 8aaf4ecaf186e746459f057c690f8f6ab2e31108840287ef0cf5c57a9a683171
	Apr 04 22:23:00 multinode-575162 kubelet[3060]: E0404 22:23:00.084571    3060 manager.go:1116] Failed to create existing container: /kubepods/podd0074f1f-69d4-49ab-9e2f-10c97b91ae01/crio-b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee: Error finding container b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee: Status 404 returned error can't find the container with id b8e602765fb09032e4df84cb2bade4d3391d0b38aeb618fc57dec2af7f7815ee
	Apr 04 22:23:00 multinode-575162 kubelet[3060]: E0404 22:23:00.084829    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod707915b69936f4e0289a4380c88d06ba/crio-3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5: Error finding container 3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5: Status 404 returned error can't find the container with id 3792f7ece0d7207059addd47388ca9fe6d9e2010064be40a77affbd402d522d5
	Apr 04 22:23:00 multinode-575162 kubelet[3060]: E0404 22:23:00.085017    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod65788567edb4a3228a58bce04f0fbc42/crio-ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8: Error finding container ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8: Status 404 returned error can't find the container with id ceacf97c23d9f6bce862b64984c7f62854710a10573f58c61d8a0f7d68f918f8
	Apr 04 22:23:00 multinode-575162 kubelet[3060]: E0404 22:23:00.085385    3060 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0d53f9e041d32925a2c1c7a5f2bf7594/crio-4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f: Error finding container 4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f: Status 404 returned error can't find the container with id 4018c8dd6629e8809ca9ffc13c76ac815a85e7682efd923ad8cc63e9e23d1c8f
	Apr 04 22:23:00 multinode-575162 kubelet[3060]: E0404 22:23:00.085621    3060 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda92ce752-ae9c-4d7b-b869-63ce1e8f94e9/crio-c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3: Error finding container c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3: Status 404 returned error can't find the container with id c28d0930476dadbdbadd0be1d3856037679fa6e822702acb42cdb907828be2a3
	Apr 04 22:23:00 multinode-575162 kubelet[3060]: E0404 22:23:00.085903    3060 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podc6efa678-d0b7-4708-880c-933bbcf4179c/crio-e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3: Error finding container e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3: Status 404 returned error can't find the container with id e83602b28499f60e90c590f3f3b7f98301b04d8e80872b9381288e8e883a63b3
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:23:51.167160   39408 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/16143-5297/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
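The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt, whose start logs contain much longer lines. A minimal, self-contained Go sketch of that failure mode and the usual Buffer() workaround (the file path is illustrative only; this is not minikube's actual logs.go code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path, not the real minikube location
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call, any single line longer than bufio.MaxScanTokenSize
		// (64 KiB) aborts the scan with "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process the line
		}
		if err := sc.Err(); err != nil {
			fmt.Println("read failed:", err)
		}
	}
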
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-575162 -n multinode-575162
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-575162 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.66s)

                                                
                                    
x
+
TestPreload (244.13s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-214349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0404 22:28:09.143102   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 22:28:50.479950   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-214349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m40.908303476s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-214349 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-214349 image pull gcr.io/k8s-minikube/busybox: (2.906070156s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-214349
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-214349: (7.344799858s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-214349 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-214349 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m9.821813562s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-214349 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-04-04 22:31:46.844796591 +0000 UTC m=+3762.707782293
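The failure is that the busybox image pulled with "minikube image pull" before the stop is no longer present after the restart; the image list above contains only the preloaded v1.24.4 images. A minimal sketch of the check the test performs (assuming a minikube binary on PATH and this run's profile name; not the actual preload_test.go source):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List the images cached in the profile after the stop/start cycle.
		out, err := exec.Command("minikube", "-p", "test-preload-214349", "image", "list").CombinedOutput()
		if err != nil {
			fmt.Printf("image list failed: %v\n%s", err, out)
			return
		}
		// The test fails when the image pulled before the restart is missing.
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Printf("expected gcr.io/k8s-minikube/busybox in image list, got:\n%s", out)
		}
	}
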
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-214349 -n test-preload-214349
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-214349 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-214349 logs -n 25: (1.136908398s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03 sudo cat                                                           |                      |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n multinode-575162 sudo cat                                       | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /home/docker/cp-test_multinode-575162-m03_multinode-575162.txt                          |                      |         |                |                     |                     |
	| cp      | multinode-575162 cp multinode-575162-m03:/home/docker/cp-test.txt                       | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m02:/home/docker/cp-test_multinode-575162-m03_multinode-575162-m02.txt |                      |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n                                                                 | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | multinode-575162-m03 sudo cat                                                           |                      |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |                |                     |                     |
	| ssh     | multinode-575162 ssh -n multinode-575162-m02 sudo cat                                   | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	|         | /home/docker/cp-test_multinode-575162-m03_multinode-575162-m02.txt                      |                      |         |                |                     |                     |
	| node    | multinode-575162 node stop m03                                                          | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:15 UTC |
	| node    | multinode-575162 node start                                                             | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:15 UTC | 04 Apr 24 22:16 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |                |                     |                     |
	| node    | list -p multinode-575162                                                                | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:16 UTC |                     |
	| stop    | -p multinode-575162                                                                     | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:16 UTC |                     |
	| start   | -p multinode-575162                                                                     | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:18 UTC | 04 Apr 24 22:21 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |                |                     |                     |
	| node    | list -p multinode-575162                                                                | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:21 UTC |                     |
	| node    | multinode-575162 node delete                                                            | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:21 UTC | 04 Apr 24 22:21 UTC |
	|         | m03                                                                                     |                      |         |                |                     |                     |
	| stop    | multinode-575162 stop                                                                   | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:21 UTC |                     |
	| start   | -p multinode-575162                                                                     | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:23 UTC | 04 Apr 24 22:26 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| node    | list -p multinode-575162                                                                | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:26 UTC |                     |
	| start   | -p multinode-575162-m02                                                                 | multinode-575162-m02 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:26 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| start   | -p multinode-575162-m03                                                                 | multinode-575162-m03 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:26 UTC | 04 Apr 24 22:27 UTC |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| node    | add -p multinode-575162                                                                 | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:27 UTC |                     |
	| delete  | -p multinode-575162-m03                                                                 | multinode-575162-m03 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:27 UTC | 04 Apr 24 22:27 UTC |
	| delete  | -p multinode-575162                                                                     | multinode-575162     | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:27 UTC | 04 Apr 24 22:27 UTC |
	| start   | -p test-preload-214349                                                                  | test-preload-214349  | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:27 UTC | 04 Apr 24 22:30 UTC |
	|         | --memory=2200                                                                           |                      |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |                |                     |                     |
	| image   | test-preload-214349 image pull                                                          | test-preload-214349  | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:30 UTC | 04 Apr 24 22:30 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |                |                     |                     |
	| stop    | -p test-preload-214349                                                                  | test-preload-214349  | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:30 UTC | 04 Apr 24 22:30 UTC |
	| start   | -p test-preload-214349                                                                  | test-preload-214349  | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:30 UTC | 04 Apr 24 22:31 UTC |
	|         | --memory=2200                                                                           |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |                |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| image   | test-preload-214349 image list                                                          | test-preload-214349  | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:31 UTC | 04 Apr 24 22:31 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 22:30:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 22:30:36.846356   41579 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:30:36.846476   41579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:30:36.846482   41579 out.go:304] Setting ErrFile to fd 2...
	I0404 22:30:36.846486   41579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:30:36.846652   41579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:30:36.847198   41579 out.go:298] Setting JSON to false
	I0404 22:30:36.848180   41579 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4382,"bootTime":1712265455,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:30:36.848244   41579 start.go:139] virtualization: kvm guest
	I0404 22:30:36.850887   41579 out.go:177] * [test-preload-214349] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:30:36.853347   41579 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:30:36.853350   41579 notify.go:220] Checking for updates...
	I0404 22:30:36.856950   41579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:30:36.858573   41579 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:30:36.860054   41579 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:30:36.861624   41579 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:30:36.863174   41579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:30:36.864950   41579 config.go:182] Loaded profile config "test-preload-214349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0404 22:30:36.865308   41579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:30:36.865359   41579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:30:36.880656   41579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0404 22:30:36.881071   41579 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:30:36.881572   41579 main.go:141] libmachine: Using API Version  1
	I0404 22:30:36.881587   41579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:30:36.881967   41579 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:30:36.882152   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	I0404 22:30:36.884444   41579 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0404 22:30:36.886086   41579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:30:36.886500   41579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:30:36.886564   41579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:30:36.901021   41579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I0404 22:30:36.901502   41579 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:30:36.901992   41579 main.go:141] libmachine: Using API Version  1
	I0404 22:30:36.902017   41579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:30:36.902432   41579 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:30:36.902626   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	I0404 22:30:36.937512   41579 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 22:30:36.938931   41579 start.go:297] selected driver: kvm2
	I0404 22:30:36.938949   41579 start.go:901] validating driver "kvm2" against &{Name:test-preload-214349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-214349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:30:36.939067   41579 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:30:36.939753   41579 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:30:36.939844   41579 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:30:36.954418   41579 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:30:36.954741   41579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:30:36.954821   41579 cni.go:84] Creating CNI manager for ""
	I0404 22:30:36.954838   41579 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:30:36.954903   41579 start.go:340] cluster config:
	{Name:test-preload-214349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-214349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:30:36.955013   41579 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:30:36.956970   41579 out.go:177] * Starting "test-preload-214349" primary control-plane node in "test-preload-214349" cluster
	I0404 22:30:36.958397   41579 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0404 22:30:37.059927   41579 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0404 22:30:37.059964   41579 cache.go:56] Caching tarball of preloaded images
	I0404 22:30:37.060160   41579 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0404 22:30:37.062273   41579 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0404 22:30:37.063856   41579 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0404 22:30:37.165376   41579 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0404 22:30:49.462573   41579 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0404 22:30:49.462699   41579 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0404 22:30:50.305262   41579 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0404 22:30:50.305371   41579 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/config.json ...
	I0404 22:30:50.305615   41579 start.go:360] acquireMachinesLock for test-preload-214349: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:30:50.305676   41579 start.go:364] duration metric: took 38.432µs to acquireMachinesLock for "test-preload-214349"
	I0404 22:30:50.305689   41579 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:30:50.305695   41579 fix.go:54] fixHost starting: 
	I0404 22:30:50.306007   41579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:30:50.306046   41579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:30:50.320998   41579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I0404 22:30:50.321443   41579 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:30:50.321893   41579 main.go:141] libmachine: Using API Version  1
	I0404 22:30:50.321908   41579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:30:50.322237   41579 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:30:50.322458   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	I0404 22:30:50.322628   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetState
	I0404 22:30:50.324397   41579 fix.go:112] recreateIfNeeded on test-preload-214349: state=Stopped err=<nil>
	I0404 22:30:50.324419   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	W0404 22:30:50.324576   41579 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:30:50.330244   41579 out.go:177] * Restarting existing kvm2 VM for "test-preload-214349" ...
	I0404 22:30:50.334316   41579 main.go:141] libmachine: (test-preload-214349) Calling .Start
	I0404 22:30:50.334594   41579 main.go:141] libmachine: (test-preload-214349) Ensuring networks are active...
	I0404 22:30:50.335660   41579 main.go:141] libmachine: (test-preload-214349) Ensuring network default is active
	I0404 22:30:50.336108   41579 main.go:141] libmachine: (test-preload-214349) Ensuring network mk-test-preload-214349 is active
	I0404 22:30:50.336535   41579 main.go:141] libmachine: (test-preload-214349) Getting domain xml...
	I0404 22:30:50.337317   41579 main.go:141] libmachine: (test-preload-214349) Creating domain...
	I0404 22:30:51.568314   41579 main.go:141] libmachine: (test-preload-214349) Waiting to get IP...
	I0404 22:30:51.569361   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:30:51.569819   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:30:51.569872   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:30:51.569792   41647 retry.go:31] will retry after 202.229312ms: waiting for machine to come up
	I0404 22:30:51.773343   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:30:51.773798   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:30:51.773825   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:30:51.773758   41647 retry.go:31] will retry after 351.752543ms: waiting for machine to come up
	I0404 22:30:52.127360   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:30:52.127867   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:30:52.127896   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:30:52.127822   41647 retry.go:31] will retry after 365.070271ms: waiting for machine to come up
	I0404 22:30:52.494340   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:30:52.494753   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:30:52.494779   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:30:52.494727   41647 retry.go:31] will retry after 462.438605ms: waiting for machine to come up
	I0404 22:30:52.958396   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:30:52.958892   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:30:52.958924   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:30:52.958829   41647 retry.go:31] will retry after 684.111321ms: waiting for machine to come up
	I0404 22:30:53.644878   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:30:53.645309   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:30:53.645344   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:30:53.645239   41647 retry.go:31] will retry after 881.587051ms: waiting for machine to come up
	I0404 22:30:54.528412   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:30:54.528873   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:30:54.528913   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:30:54.528829   41647 retry.go:31] will retry after 1.110320007s: waiting for machine to come up
	I0404 22:30:55.640637   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:30:55.641158   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:30:55.641189   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:30:55.641093   41647 retry.go:31] will retry after 1.477928303s: waiting for machine to come up
	I0404 22:30:57.120897   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:30:57.121497   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:30:57.121523   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:30:57.121416   41647 retry.go:31] will retry after 1.715753655s: waiting for machine to come up
	I0404 22:30:58.838429   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:30:58.838867   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:30:58.838894   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:30:58.838831   41647 retry.go:31] will retry after 1.821940913s: waiting for machine to come up
	I0404 22:31:00.662448   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:00.662990   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:31:00.663021   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:31:00.662922   41647 retry.go:31] will retry after 2.15102063s: waiting for machine to come up
	I0404 22:31:02.815727   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:02.816137   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:31:02.816168   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:31:02.816078   41647 retry.go:31] will retry after 3.637082194s: waiting for machine to come up
	I0404 22:31:06.454882   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:06.455398   41579 main.go:141] libmachine: (test-preload-214349) DBG | unable to find current IP address of domain test-preload-214349 in network mk-test-preload-214349
	I0404 22:31:06.455425   41579 main.go:141] libmachine: (test-preload-214349) DBG | I0404 22:31:06.455344   41647 retry.go:31] will retry after 3.927685062s: waiting for machine to come up
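The "will retry after ..." lines above are the driver polling libvirt for the VM's DHCP lease, with the wait growing on each failed attempt. A minimal Go sketch of that wait-with-growing-backoff pattern, purely for illustration; lookupIP, waitForIP, the 300ms starting delay, and the jitter factor are assumptions for this sketch, not minikube's actual retry API or values:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for the libvirt DHCP-lease query; here it always fails.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with an increasing, jittered delay until the
    // deadline passes, mirroring the growing "will retry after" intervals above.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        if ip, err := waitForIP(2 * time.Second); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }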
	I0404 22:31:10.384478   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.384962   41579 main.go:141] libmachine: (test-preload-214349) Found IP for machine: 192.168.39.38
	I0404 22:31:10.384986   41579 main.go:141] libmachine: (test-preload-214349) Reserving static IP address...
	I0404 22:31:10.385003   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has current primary IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.385561   41579 main.go:141] libmachine: (test-preload-214349) Reserved static IP address: 192.168.39.38
	I0404 22:31:10.385579   41579 main.go:141] libmachine: (test-preload-214349) Waiting for SSH to be available...
	I0404 22:31:10.385599   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "test-preload-214349", mac: "52:54:00:82:0c:58", ip: "192.168.39.38"} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:10.385625   41579 main.go:141] libmachine: (test-preload-214349) DBG | skip adding static IP to network mk-test-preload-214349 - found existing host DHCP lease matching {name: "test-preload-214349", mac: "52:54:00:82:0c:58", ip: "192.168.39.38"}
	I0404 22:31:10.385640   41579 main.go:141] libmachine: (test-preload-214349) DBG | Getting to WaitForSSH function...
	I0404 22:31:10.388261   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.388599   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:10.388632   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.388758   41579 main.go:141] libmachine: (test-preload-214349) DBG | Using SSH client type: external
	I0404 22:31:10.388784   41579 main.go:141] libmachine: (test-preload-214349) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/test-preload-214349/id_rsa (-rw-------)
	I0404 22:31:10.388815   41579 main.go:141] libmachine: (test-preload-214349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/test-preload-214349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:31:10.388831   41579 main.go:141] libmachine: (test-preload-214349) DBG | About to run SSH command:
	I0404 22:31:10.388871   41579 main.go:141] libmachine: (test-preload-214349) DBG | exit 0
	I0404 22:31:10.516639   41579 main.go:141] libmachine: (test-preload-214349) DBG | SSH cmd err, output: <nil>: 
	I0404 22:31:10.517095   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetConfigRaw
	I0404 22:31:10.517718   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetIP
	I0404 22:31:10.520522   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.520974   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:10.521004   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.521300   41579 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/config.json ...
	I0404 22:31:10.521527   41579 machine.go:94] provisionDockerMachine start ...
	I0404 22:31:10.521550   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	I0404 22:31:10.521798   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:10.524358   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.524769   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:10.524814   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.524913   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHPort
	I0404 22:31:10.525069   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:10.525267   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:10.525481   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHUsername
	I0404 22:31:10.525719   41579 main.go:141] libmachine: Using SSH client type: native
	I0404 22:31:10.525930   41579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0404 22:31:10.525946   41579 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:31:10.632945   41579 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:31:10.632978   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetMachineName
	I0404 22:31:10.633287   41579 buildroot.go:166] provisioning hostname "test-preload-214349"
	I0404 22:31:10.633317   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetMachineName
	I0404 22:31:10.633515   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:10.636223   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.636578   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:10.636608   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.636741   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHPort
	I0404 22:31:10.636933   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:10.637102   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:10.637268   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHUsername
	I0404 22:31:10.637460   41579 main.go:141] libmachine: Using SSH client type: native
	I0404 22:31:10.637622   41579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0404 22:31:10.637637   41579 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-214349 && echo "test-preload-214349" | sudo tee /etc/hostname
	I0404 22:31:10.759090   41579 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-214349
	
	I0404 22:31:10.759121   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:10.762405   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.762801   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:10.762856   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.763073   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHPort
	I0404 22:31:10.763300   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:10.763576   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:10.763734   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHUsername
	I0404 22:31:10.763897   41579 main.go:141] libmachine: Using SSH client type: native
	I0404 22:31:10.764147   41579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0404 22:31:10.764174   41579 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-214349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-214349/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-214349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:31:10.882132   41579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:31:10.882158   41579 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:31:10.882196   41579 buildroot.go:174] setting up certificates
	I0404 22:31:10.882204   41579 provision.go:84] configureAuth start
	I0404 22:31:10.882212   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetMachineName
	I0404 22:31:10.882565   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetIP
	I0404 22:31:10.885441   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.885848   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:10.885882   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.886084   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:10.888709   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.889043   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:10.889069   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:10.889280   41579 provision.go:143] copyHostCerts
	I0404 22:31:10.889338   41579 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:31:10.889356   41579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:31:10.889435   41579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:31:10.889574   41579 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:31:10.889584   41579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:31:10.889611   41579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:31:10.889676   41579 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:31:10.889684   41579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:31:10.889711   41579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:31:10.889778   41579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.test-preload-214349 san=[127.0.0.1 192.168.39.38 localhost minikube test-preload-214349]
	I0404 22:31:11.129503   41579 provision.go:177] copyRemoteCerts
	I0404 22:31:11.129557   41579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:31:11.129582   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:11.132646   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.133123   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:11.133146   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.133326   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHPort
	I0404 22:31:11.133585   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:11.133811   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHUsername
	I0404 22:31:11.133943   41579 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/test-preload-214349/id_rsa Username:docker}
	I0404 22:31:11.219610   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:31:11.246793   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0404 22:31:11.272807   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:31:11.298408   41579 provision.go:87] duration metric: took 416.191967ms to configureAuth
	I0404 22:31:11.298435   41579 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:31:11.298604   41579 config.go:182] Loaded profile config "test-preload-214349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0404 22:31:11.298674   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:11.301382   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.301760   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:11.301792   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.301947   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHPort
	I0404 22:31:11.302190   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:11.302460   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:11.302625   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHUsername
	I0404 22:31:11.302813   41579 main.go:141] libmachine: Using SSH client type: native
	I0404 22:31:11.302976   41579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0404 22:31:11.302995   41579 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:31:11.603266   41579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:31:11.603317   41579 machine.go:97] duration metric: took 1.081775164s to provisionDockerMachine
	I0404 22:31:11.603344   41579 start.go:293] postStartSetup for "test-preload-214349" (driver="kvm2")
	I0404 22:31:11.603362   41579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:31:11.603387   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	I0404 22:31:11.603694   41579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:31:11.603720   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:11.606572   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.606938   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:11.606968   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.607137   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHPort
	I0404 22:31:11.607351   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:11.607583   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHUsername
	I0404 22:31:11.607769   41579 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/test-preload-214349/id_rsa Username:docker}
	I0404 22:31:11.691683   41579 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:31:11.696266   41579 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:31:11.696290   41579 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:31:11.696359   41579 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:31:11.696458   41579 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:31:11.696576   41579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:31:11.706749   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:31:11.733325   41579 start.go:296] duration metric: took 129.960638ms for postStartSetup
	I0404 22:31:11.733377   41579 fix.go:56] duration metric: took 21.42768154s for fixHost
	I0404 22:31:11.733402   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:11.736375   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.736718   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:11.736751   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.736883   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHPort
	I0404 22:31:11.737090   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:11.737284   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:11.737453   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHUsername
	I0404 22:31:11.737639   41579 main.go:141] libmachine: Using SSH client type: native
	I0404 22:31:11.737803   41579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0404 22:31:11.737813   41579 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:31:11.845344   41579 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712269871.795532653
	
	I0404 22:31:11.845382   41579 fix.go:216] guest clock: 1712269871.795532653
	I0404 22:31:11.845389   41579 fix.go:229] Guest: 2024-04-04 22:31:11.795532653 +0000 UTC Remote: 2024-04-04 22:31:11.733381917 +0000 UTC m=+34.933517357 (delta=62.150736ms)
	I0404 22:31:11.845411   41579 fix.go:200] guest clock delta is within tolerance: 62.150736ms
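The fix.go lines above compare the guest clock reported over SSH against the host clock and accept the drift because the delta (about 62ms here) is within tolerance. A rough Go sketch of that comparison; the one-second tolerance and function name are illustrative assumptions, not necessarily minikube's actual values:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // host clock that no resync is needed. The tolerance value is illustrative.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(62 * time.Millisecond) // roughly the delta observed in the log
        fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
    }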
	I0404 22:31:11.845416   41579 start.go:83] releasing machines lock for "test-preload-214349", held for 21.539732983s
	I0404 22:31:11.845435   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	I0404 22:31:11.845769   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetIP
	I0404 22:31:11.849028   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.849658   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	I0404 22:31:11.850011   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:11.850033   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.850547   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	I0404 22:31:11.850714   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	I0404 22:31:11.850841   41579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:31:11.850893   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:11.850950   41579 ssh_runner.go:195] Run: cat /version.json
	I0404 22:31:11.850973   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:11.854182   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.854382   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.854598   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:11.854628   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.854793   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHPort
	I0404 22:31:11.854901   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:11.854925   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:11.855108   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHPort
	I0404 22:31:11.855120   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:11.855285   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:11.855287   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHUsername
	I0404 22:31:11.855458   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHUsername
	I0404 22:31:11.855458   41579 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/test-preload-214349/id_rsa Username:docker}
	I0404 22:31:11.855606   41579 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/test-preload-214349/id_rsa Username:docker}
	I0404 22:31:11.933892   41579 ssh_runner.go:195] Run: systemctl --version
	I0404 22:31:11.975931   41579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:31:12.126153   41579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:31:12.132791   41579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:31:12.132876   41579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:31:12.150004   41579 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:31:12.150035   41579 start.go:494] detecting cgroup driver to use...
	I0404 22:31:12.150106   41579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:31:12.166924   41579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:31:12.181402   41579 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:31:12.181474   41579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:31:12.196339   41579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:31:12.211516   41579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:31:12.339631   41579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:31:12.519972   41579 docker.go:233] disabling docker service ...
	I0404 22:31:12.520050   41579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:31:12.535853   41579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:31:12.550517   41579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:31:12.675185   41579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:31:12.809540   41579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:31:12.824692   41579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:31:12.844669   41579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0404 22:31:12.844753   41579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:31:12.856931   41579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:31:12.857010   41579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:31:12.869008   41579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:31:12.881278   41579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:31:12.893416   41579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:31:12.905389   41579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:31:12.917212   41579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:31:12.936936   41579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:31:12.949523   41579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:31:12.960630   41579 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:31:12.960683   41579 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:31:12.975354   41579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:31:12.986549   41579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:31:13.113976   41579 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:31:13.255065   41579 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:31:13.255132   41579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:31:13.260213   41579 start.go:562] Will wait 60s for crictl version
	I0404 22:31:13.260291   41579 ssh_runner.go:195] Run: which crictl
	I0404 22:31:13.264366   41579 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:31:13.302813   41579 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:31:13.302901   41579 ssh_runner.go:195] Run: crio --version
	I0404 22:31:13.332916   41579 ssh_runner.go:195] Run: crio --version
	I0404 22:31:13.366352   41579 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0404 22:31:13.368027   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetIP
	I0404 22:31:13.371272   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:13.371700   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:13.371723   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:13.371901   41579 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:31:13.376468   41579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:31:13.390571   41579 kubeadm.go:877] updating cluster {Name:test-preload-214349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-214349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:31:13.390687   41579 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0404 22:31:13.390730   41579 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:31:13.430400   41579 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0404 22:31:13.430460   41579 ssh_runner.go:195] Run: which lz4
	I0404 22:31:13.434566   41579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:31:13.438888   41579 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:31:13.438922   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0404 22:31:15.198845   41579 crio.go:462] duration metric: took 1.764305984s to copy over tarball
	I0404 22:31:15.198914   41579 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:31:17.683445   41579 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.484504469s)
	I0404 22:31:17.683471   41579 crio.go:469] duration metric: took 2.48459985s to extract the tarball
	I0404 22:31:17.683478   41579 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:31:17.725809   41579 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:31:17.777938   41579 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0404 22:31:17.777960   41579 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:31:17.778027   41579 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0404 22:31:17.778041   41579 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0404 22:31:17.778059   41579 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0404 22:31:17.778050   41579 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0404 22:31:17.778068   41579 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0404 22:31:17.778123   41579 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0404 22:31:17.778027   41579 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:31:17.778084   41579 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0404 22:31:17.779658   41579 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0404 22:31:17.779686   41579 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0404 22:31:17.779722   41579 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:31:17.779662   41579 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0404 22:31:17.779852   41579 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0404 22:31:17.779895   41579 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0404 22:31:17.779924   41579 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0404 22:31:17.779991   41579 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0404 22:31:17.990368   41579 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0404 22:31:18.035090   41579 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0404 22:31:18.036057   41579 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0404 22:31:18.039148   41579 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0404 22:31:18.039178   41579 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0404 22:31:18.039217   41579 ssh_runner.go:195] Run: which crictl
	I0404 22:31:18.039665   41579 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0404 22:31:18.051752   41579 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0404 22:31:18.130224   41579 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0404 22:31:18.132781   41579 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0404 22:31:18.157958   41579 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0404 22:31:18.158011   41579 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0404 22:31:18.158037   41579 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0404 22:31:18.158056   41579 ssh_runner.go:195] Run: which crictl
	I0404 22:31:18.158068   41579 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0404 22:31:18.158077   41579 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0404 22:31:18.158116   41579 ssh_runner.go:195] Run: which crictl
	I0404 22:31:18.160334   41579 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0404 22:31:18.160368   41579 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0404 22:31:18.160414   41579 ssh_runner.go:195] Run: which crictl
	I0404 22:31:18.166838   41579 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0404 22:31:18.166874   41579 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0404 22:31:18.166915   41579 ssh_runner.go:195] Run: which crictl
	I0404 22:31:18.239027   41579 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0404 22:31:18.239065   41579 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0404 22:31:18.239122   41579 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0404 22:31:18.239185   41579 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0404 22:31:18.239192   41579 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0404 22:31:18.239224   41579 ssh_runner.go:195] Run: which crictl
	I0404 22:31:18.239128   41579 ssh_runner.go:195] Run: which crictl
	I0404 22:31:18.242298   41579 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0404 22:31:18.242401   41579 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0404 22:31:18.242458   41579 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0404 22:31:18.242497   41579 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0404 22:31:18.242597   41579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0404 22:31:18.244562   41579 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0404 22:31:18.326911   41579 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0404 22:31:18.327005   41579 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0404 22:31:18.327011   41579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0404 22:31:18.365643   41579 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0404 22:31:18.365770   41579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0404 22:31:18.373366   41579 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0404 22:31:18.373451   41579 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0404 22:31:18.373505   41579 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0404 22:31:18.373521   41579 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0404 22:31:18.373549   41579 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0404 22:31:18.373557   41579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0404 22:31:18.373579   41579 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0404 22:31:18.373464   41579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0404 22:31:18.373550   41579 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0404 22:31:18.373702   41579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0404 22:31:18.416526   41579 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0404 22:31:18.416594   41579 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0404 22:31:18.416651   41579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0404 22:31:18.647472   41579 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:31:21.061036   41579 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.687465605s)
	I0404 22:31:21.061079   41579 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0404 22:31:21.061107   41579 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0404 22:31:21.061116   41579 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: (2.68754088s)
	I0404 22:31:21.061132   41579 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.687521974s)
	I0404 22:31:21.061143   41579 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0404 22:31:21.061115   41579 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: (2.687394152s)
	I0404 22:31:21.061154   41579 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0404 22:31:21.061160   41579 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0404 22:31:21.061176   41579 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0404 22:31:21.061198   41579 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.64451963s)
	I0404 22:31:21.061220   41579 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0404 22:31:21.061244   41579 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.413746732s)
	I0404 22:31:21.825439   41579 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0404 22:31:21.825488   41579 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0404 22:31:21.825530   41579 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0404 22:31:21.971647   41579 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0404 22:31:21.971690   41579 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0404 22:31:21.971738   41579 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0404 22:31:24.221719   41579 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.2499556s)
	I0404 22:31:24.221752   41579 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0404 22:31:24.221778   41579 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0404 22:31:24.221813   41579 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0404 22:31:24.672744   41579 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0404 22:31:24.672799   41579 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0404 22:31:24.672860   41579 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0404 22:31:25.121055   41579 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0404 22:31:25.121113   41579 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0404 22:31:25.121173   41579 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0404 22:31:25.867717   41579 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0404 22:31:25.867768   41579 cache_images.go:123] Successfully loaded all cached images
	I0404 22:31:25.867776   41579 cache_images.go:92] duration metric: took 8.089804177s to LoadCachedImages
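The image-load phase above copies each cached tarball to the node only when it is missing (the "copy: skipping ... (exists)" lines) and then loads every tarball into the CRI-O image store with `sudo podman load -i`. A rough local sketch of that two-step pattern, using paths from the log; `copyToNode` is a hypothetical stand-in for minikube's scp step, not its actual API:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// copyToNode stands in for minikube's transfer step; here it only reports
// what would be copied.
func copyToNode(path string) error {
	fmt.Printf("would copy %s to the node\n", path)
	return nil
}

// loadCachedImages mirrors the log above: copy a tarball only when it is
// missing, then load every tarball into the image store with podman.
func loadCachedImages(tarballs []string) error {
	for _, tar := range tarballs {
		if _, err := os.Stat(tar); os.IsNotExist(err) {
			if err := copyToNode(tar); err != nil {
				return err
			}
		} else {
			fmt.Printf("copy: skipping %s (exists)\n", tar)
		}
		cmd := exec.Command("sudo", "podman", "load", "-i", tar)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", tar, err)
		}
	}
	return nil
}

func main() {
	_ = loadCachedImages([]string{
		"/var/lib/minikube/images/etcd_3.5.3-0",
		"/var/lib/minikube/images/coredns_v1.8.6",
	})
}
```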
	I0404 22:31:25.867792   41579 kubeadm.go:928] updating node { 192.168.39.38 8443 v1.24.4 crio true true} ...
	I0404 22:31:25.867909   41579 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-214349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-214349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:31:25.867982   41579 ssh_runner.go:195] Run: crio config
	I0404 22:31:25.917636   41579 cni.go:84] Creating CNI manager for ""
	I0404 22:31:25.917661   41579 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:31:25.917672   41579 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:31:25.917689   41579 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-214349 NodeName:test-preload-214349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:31:25.917817   41579 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-214349"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:31:25.917876   41579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0404 22:31:25.928650   41579 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:31:25.928711   41579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:31:25.938932   41579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0404 22:31:25.956828   41579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:31:25.974797   41579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0404 22:31:25.994047   41579 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0404 22:31:25.998331   41579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
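The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current node IP. A minimal Go equivalent of that filter-and-append step, as a sketch only; the IP and hostname come from this run, and since the real /etc/hosts needs root the example targets a scratch file:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing mapping for hostname and appends a
// fresh "<ip>\t<hostname>" line, mirroring the grep/echo pipeline above.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // old mapping; replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Point at a scratch copy when experimenting; /etc/hosts itself needs root.
	if err := ensureHostsEntry("hosts.test", "192.168.39.38", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```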
	I0404 22:31:26.011548   41579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:31:26.137669   41579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:31:26.156582   41579 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349 for IP: 192.168.39.38
	I0404 22:31:26.156606   41579 certs.go:194] generating shared ca certs ...
	I0404 22:31:26.156622   41579 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:31:26.156759   41579 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:31:26.156802   41579 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:31:26.156814   41579 certs.go:256] generating profile certs ...
	I0404 22:31:26.156920   41579 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/client.key
	I0404 22:31:26.157001   41579 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/apiserver.key.89f7be3e
	I0404 22:31:26.157042   41579 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/proxy-client.key
	I0404 22:31:26.157147   41579 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:31:26.157174   41579 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:31:26.157184   41579 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:31:26.157206   41579 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:31:26.157229   41579 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:31:26.157268   41579 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:31:26.157309   41579 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:31:26.157947   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:31:26.202588   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:31:26.234009   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:31:26.261235   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:31:26.287315   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0404 22:31:26.334201   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:31:26.377171   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:31:26.404719   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:31:26.432508   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:31:26.458719   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:31:26.484060   41579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:31:26.510549   41579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:31:26.528340   41579 ssh_runner.go:195] Run: openssl version
	I0404 22:31:26.534439   41579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:31:26.545529   41579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:31:26.550395   41579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:31:26.550456   41579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:31:26.556400   41579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:31:26.567312   41579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:31:26.578533   41579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:31:26.583127   41579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:31:26.583172   41579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:31:26.589217   41579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:31:26.600488   41579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:31:26.611908   41579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:31:26.616738   41579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:31:26.616787   41579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:31:26.622859   41579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:31:26.634365   41579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:31:26.639369   41579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:31:26.645791   41579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:31:26.652042   41579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:31:26.658321   41579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:31:26.664478   41579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:31:26.670682   41579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
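Each `openssl x509 -checkend 86400` run above asks one question: does this certificate expire within the next 86400 seconds (24 hours)? The same check done natively with crypto/x509, sketched against one of the certificate paths from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend <seconds>` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```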
	I0404 22:31:26.676833   41579 kubeadm.go:391] StartCluster: {Name:test-preload-214349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-214349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:31:26.676933   41579 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:31:26.676990   41579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:31:26.715086   41579 cri.go:89] found id: ""
	I0404 22:31:26.715158   41579 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:31:26.726020   41579 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:31:26.726046   41579 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:31:26.726052   41579 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:31:26.726103   41579 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:31:26.735991   41579 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:31:26.736469   41579 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-214349" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:31:26.736611   41579 kubeconfig.go:62] /home/jenkins/minikube-integration/16143-5297/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-214349" cluster setting kubeconfig missing "test-preload-214349" context setting]
	I0404 22:31:26.736865   41579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:31:26.737467   41579 kapi.go:59] client config for test-preload-214349: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/client.key", CAFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5c6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0404 22:31:26.738056   41579 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:31:26.748091   41579 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.38
	I0404 22:31:26.748137   41579 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:31:26.748151   41579 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:31:26.748207   41579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:31:26.785432   41579 cri.go:89] found id: ""
	I0404 22:31:26.785523   41579 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:31:26.802126   41579 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:31:26.811938   41579 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:31:26.811978   41579 kubeadm.go:156] found existing configuration files:
	
	I0404 22:31:26.812039   41579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:31:26.821141   41579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:31:26.821208   41579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:31:26.830759   41579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:31:26.839765   41579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:31:26.839822   41579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:31:26.849843   41579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:31:26.859784   41579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:31:26.859872   41579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:31:26.869483   41579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:31:26.878437   41579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:31:26.878491   41579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:31:26.888156   41579 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:31:26.898464   41579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:31:26.986961   41579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:31:27.687151   41579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:31:27.956081   41579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:31:28.032142   41579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:31:28.173516   41579 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:31:28.173599   41579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:31:28.674642   41579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:31:29.173662   41579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:31:29.194450   41579 api_server.go:72] duration metric: took 1.020930688s to wait for apiserver process to appear ...
	I0404 22:31:29.194480   41579 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:31:29.194503   41579 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0404 22:31:29.195089   41579 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0404 22:31:29.694642   41579 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0404 22:31:33.491932   41579 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:31:33.491967   41579 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:31:33.491985   41579 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0404 22:31:33.549141   41579 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:31:33.549179   41579 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:31:33.695415   41579 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0404 22:31:33.708885   41579 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:31:33.708916   41579 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:31:34.195485   41579 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0404 22:31:34.203303   41579 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:31:34.203328   41579 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:31:34.694901   41579 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0404 22:31:34.705077   41579 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0404 22:31:34.719756   41579 api_server.go:141] control plane version: v1.24.4
	I0404 22:31:34.719786   41579 api_server.go:131] duration metric: took 5.525299148s to wait for apiserver health ...
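The health wait above polls https://192.168.39.38:8443/healthz roughly twice a second, logging each 500 response (with its per-hook `[-] ... failed` lines) until the endpoint answers a plain 200 `ok`. A minimal poller in the same spirit; TLS verification is disabled here only to keep the sketch short, whereas minikube authenticates with the cluster's client certificates:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes,
// mirroring the apiserver health wait in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only; the real client presents client certificates.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.38:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```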
	I0404 22:31:34.719794   41579 cni.go:84] Creating CNI manager for ""
	I0404 22:31:34.719800   41579 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:31:34.721807   41579 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:31:34.723244   41579 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:31:34.748295   41579 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
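The 496-byte file written to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration that the earlier `cni.go:146` lines recommended for the kvm2 + crio combination. As an illustration only (the exact file minikube writes may differ), here is a sketch that emits a generic bridge + portmap conflist using the pod CIDR from the kubeadm config:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Generic bridge CNI conflist in the shape of the file copied above;
	// the subnet matches the pod CIDR from the kubeadm config (10.244.0.0/16).
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
```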
	I0404 22:31:34.773845   41579 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:31:34.785082   41579 system_pods.go:59] 7 kube-system pods found
	I0404 22:31:34.785118   41579 system_pods.go:61] "coredns-6d4b75cb6d-dv84q" [7b7b9216-cdb7-4058-be32-47508983cb98] Running
	I0404 22:31:34.785123   41579 system_pods.go:61] "etcd-test-preload-214349" [ca82b1ce-8a5d-41d2-8558-cfe568db34c4] Running
	I0404 22:31:34.785128   41579 system_pods.go:61] "kube-apiserver-test-preload-214349" [96fb3342-efa4-4769-a75d-398908b3c8ed] Running
	I0404 22:31:34.785137   41579 system_pods.go:61] "kube-controller-manager-test-preload-214349" [67ef08e9-e3c9-46eb-8778-cb93a01f8810] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:31:34.785144   41579 system_pods.go:61] "kube-proxy-k9xlt" [5452f7d3-b135-483f-af00-2cf75e23dedf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:31:34.785153   41579 system_pods.go:61] "kube-scheduler-test-preload-214349" [1bafd113-b2cc-41f3-93aa-e51b3be48b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:31:34.785167   41579 system_pods.go:61] "storage-provisioner" [62e494b5-e25a-4f54-859f-ce54de8c305e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:31:34.785176   41579 system_pods.go:74] duration metric: took 11.307051ms to wait for pod list to return data ...
	I0404 22:31:34.785186   41579 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:31:34.791213   41579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:31:34.791241   41579 node_conditions.go:123] node cpu capacity is 2
	I0404 22:31:34.791254   41579 node_conditions.go:105] duration metric: took 6.062144ms to run NodePressure ...
	I0404 22:31:34.791272   41579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:31:35.083235   41579 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:31:35.092672   41579 kubeadm.go:733] kubelet initialised
	I0404 22:31:35.092693   41579 kubeadm.go:734] duration metric: took 9.436477ms waiting for restarted kubelet to initialise ...
	I0404 22:31:35.092700   41579 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:31:35.100018   41579 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-dv84q" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:35.106005   41579 pod_ready.go:97] node "test-preload-214349" hosting pod "coredns-6d4b75cb6d-dv84q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.106040   41579 pod_ready.go:81] duration metric: took 5.98835ms for pod "coredns-6d4b75cb6d-dv84q" in "kube-system" namespace to be "Ready" ...
	E0404 22:31:35.106053   41579 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-214349" hosting pod "coredns-6d4b75cb6d-dv84q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.106076   41579 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:35.111448   41579 pod_ready.go:97] node "test-preload-214349" hosting pod "etcd-test-preload-214349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.111479   41579 pod_ready.go:81] duration metric: took 5.383136ms for pod "etcd-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	E0404 22:31:35.111490   41579 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-214349" hosting pod "etcd-test-preload-214349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.111498   41579 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:35.118998   41579 pod_ready.go:97] node "test-preload-214349" hosting pod "kube-apiserver-test-preload-214349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.119029   41579 pod_ready.go:81] duration metric: took 7.511472ms for pod "kube-apiserver-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	E0404 22:31:35.119040   41579 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-214349" hosting pod "kube-apiserver-test-preload-214349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.119049   41579 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:35.181493   41579 pod_ready.go:97] node "test-preload-214349" hosting pod "kube-controller-manager-test-preload-214349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.181522   41579 pod_ready.go:81] duration metric: took 62.455878ms for pod "kube-controller-manager-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	E0404 22:31:35.181532   41579 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-214349" hosting pod "kube-controller-manager-test-preload-214349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.181539   41579 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k9xlt" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:35.581135   41579 pod_ready.go:97] node "test-preload-214349" hosting pod "kube-proxy-k9xlt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.581164   41579 pod_ready.go:81] duration metric: took 399.614785ms for pod "kube-proxy-k9xlt" in "kube-system" namespace to be "Ready" ...
	E0404 22:31:35.581177   41579 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-214349" hosting pod "kube-proxy-k9xlt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.581186   41579 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:35.977870   41579 pod_ready.go:97] node "test-preload-214349" hosting pod "kube-scheduler-test-preload-214349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.977895   41579 pod_ready.go:81] duration metric: took 396.702348ms for pod "kube-scheduler-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	E0404 22:31:35.977904   41579 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-214349" hosting pod "kube-scheduler-test-preload-214349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:35.977917   41579 pod_ready.go:38] duration metric: took 885.208708ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
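Each pod_ready entry above polls the pod's Ready condition and deliberately skips pods whose node still reports Ready=False. Outside the test harness, the same condition can be read from kubectl's JSON output; a hedged sketch using one pod name from this run:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// podReady shells out to kubectl and reports whether the pod's Ready
// condition is True, the same condition the pod_ready waiters above poll.
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, name, "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var p podStatus
	if err := json.Unmarshal(out, &p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	ready, err := podReady("kube-system", "coredns-6d4b75cb6d-dv84q")
	fmt.Println(ready, err)
}
```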
	I0404 22:31:35.977937   41579 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:31:35.990323   41579 ops.go:34] apiserver oom_adj: -16
	I0404 22:31:35.990348   41579 kubeadm.go:591] duration metric: took 9.264289674s to restartPrimaryControlPlane
	I0404 22:31:35.990361   41579 kubeadm.go:393] duration metric: took 9.313534616s to StartCluster
	I0404 22:31:35.990379   41579 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:31:35.990474   41579 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:31:35.991351   41579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:31:35.991655   41579 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:31:35.993521   41579 out.go:177] * Verifying Kubernetes components...
	I0404 22:31:35.991725   41579 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:31:35.991885   41579 config.go:182] Loaded profile config "test-preload-214349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0404 22:31:35.995078   41579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:31:35.995091   41579 addons.go:69] Setting default-storageclass=true in profile "test-preload-214349"
	I0404 22:31:35.995097   41579 addons.go:69] Setting storage-provisioner=true in profile "test-preload-214349"
	I0404 22:31:35.995129   41579 addons.go:234] Setting addon storage-provisioner=true in "test-preload-214349"
	W0404 22:31:35.995152   41579 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:31:35.995183   41579 host.go:66] Checking if "test-preload-214349" exists ...
	I0404 22:31:35.995134   41579 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-214349"
	I0404 22:31:35.995554   41579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:31:35.995599   41579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:31:35.995697   41579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:31:35.995742   41579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:31:36.010831   41579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34291
	I0404 22:31:36.010885   41579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
	I0404 22:31:36.011383   41579 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:31:36.011382   41579 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:31:36.011903   41579 main.go:141] libmachine: Using API Version  1
	I0404 22:31:36.011923   41579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:31:36.012051   41579 main.go:141] libmachine: Using API Version  1
	I0404 22:31:36.012075   41579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:31:36.012283   41579 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:31:36.012365   41579 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:31:36.012647   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetState
	I0404 22:31:36.012745   41579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:31:36.012804   41579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:31:36.015276   41579 kapi.go:59] client config for test-preload-214349: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/profiles/test-preload-214349/client.key", CAFile:"/home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5c6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0404 22:31:36.015613   41579 addons.go:234] Setting addon default-storageclass=true in "test-preload-214349"
	W0404 22:31:36.015633   41579 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:31:36.015661   41579 host.go:66] Checking if "test-preload-214349" exists ...
	I0404 22:31:36.016052   41579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:31:36.016098   41579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:31:36.028450   41579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0404 22:31:36.028911   41579 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:31:36.029457   41579 main.go:141] libmachine: Using API Version  1
	I0404 22:31:36.029483   41579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:31:36.029799   41579 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:31:36.029987   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetState
	I0404 22:31:36.030542   41579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
	I0404 22:31:36.030901   41579 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:31:36.031358   41579 main.go:141] libmachine: Using API Version  1
	I0404 22:31:36.031380   41579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:31:36.031733   41579 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:31:36.031848   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	I0404 22:31:36.034186   41579 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:31:36.032271   41579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:31:36.035599   41579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:31:36.035709   41579 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:31:36.035729   41579 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:31:36.035751   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:36.038587   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:36.039006   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:36.039058   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:36.039299   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHPort
	I0404 22:31:36.039473   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:36.039615   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHUsername
	I0404 22:31:36.039754   41579 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/test-preload-214349/id_rsa Username:docker}
	I0404 22:31:36.051147   41579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I0404 22:31:36.051595   41579 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:31:36.052176   41579 main.go:141] libmachine: Using API Version  1
	I0404 22:31:36.052197   41579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:31:36.052496   41579 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:31:36.052702   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetState
	I0404 22:31:36.054597   41579 main.go:141] libmachine: (test-preload-214349) Calling .DriverName
	I0404 22:31:36.054891   41579 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:31:36.054907   41579 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:31:36.054925   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHHostname
	I0404 22:31:36.057663   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:36.058078   41579 main.go:141] libmachine: (test-preload-214349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0c:58", ip: ""} in network mk-test-preload-214349: {Iface:virbr1 ExpiryTime:2024-04-04 23:31:01 +0000 UTC Type:0 Mac:52:54:00:82:0c:58 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-214349 Clientid:01:52:54:00:82:0c:58}
	I0404 22:31:36.058106   41579 main.go:141] libmachine: (test-preload-214349) DBG | domain test-preload-214349 has defined IP address 192.168.39.38 and MAC address 52:54:00:82:0c:58 in network mk-test-preload-214349
	I0404 22:31:36.058265   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHPort
	I0404 22:31:36.058446   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHKeyPath
	I0404 22:31:36.058624   41579 main.go:141] libmachine: (test-preload-214349) Calling .GetSSHUsername
	I0404 22:31:36.058782   41579 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/test-preload-214349/id_rsa Username:docker}
	I0404 22:31:36.199120   41579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:31:36.224246   41579 node_ready.go:35] waiting up to 6m0s for node "test-preload-214349" to be "Ready" ...
	I0404 22:31:36.287970   41579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:31:36.300038   41579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:31:37.180268   41579 main.go:141] libmachine: Making call to close driver server
	I0404 22:31:37.180288   41579 main.go:141] libmachine: (test-preload-214349) Calling .Close
	I0404 22:31:37.180565   41579 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:31:37.180592   41579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:31:37.180601   41579 main.go:141] libmachine: (test-preload-214349) DBG | Closing plugin on server side
	I0404 22:31:37.180605   41579 main.go:141] libmachine: Making call to close driver server
	I0404 22:31:37.180655   41579 main.go:141] libmachine: (test-preload-214349) Calling .Close
	I0404 22:31:37.180933   41579 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:31:37.180953   41579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:31:37.180965   41579 main.go:141] libmachine: (test-preload-214349) DBG | Closing plugin on server side
	I0404 22:31:37.186650   41579 main.go:141] libmachine: Making call to close driver server
	I0404 22:31:37.186670   41579 main.go:141] libmachine: (test-preload-214349) Calling .Close
	I0404 22:31:37.186925   41579 main.go:141] libmachine: (test-preload-214349) DBG | Closing plugin on server side
	I0404 22:31:37.186957   41579 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:31:37.186971   41579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:31:37.191342   41579 main.go:141] libmachine: Making call to close driver server
	I0404 22:31:37.191360   41579 main.go:141] libmachine: (test-preload-214349) Calling .Close
	I0404 22:31:37.191594   41579 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:31:37.191606   41579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:31:37.191616   41579 main.go:141] libmachine: Making call to close driver server
	I0404 22:31:37.191624   41579 main.go:141] libmachine: (test-preload-214349) Calling .Close
	I0404 22:31:37.191637   41579 main.go:141] libmachine: (test-preload-214349) DBG | Closing plugin on server side
	I0404 22:31:37.191832   41579 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:31:37.191846   41579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:31:37.193876   41579 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0404 22:31:37.195250   41579 addons.go:505] duration metric: took 1.203538212s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0404 22:31:38.229408   41579 node_ready.go:53] node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:40.728241   41579 node_ready.go:53] node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:42.728535   41579 node_ready.go:53] node "test-preload-214349" has status "Ready":"False"
	I0404 22:31:43.729320   41579 node_ready.go:49] node "test-preload-214349" has status "Ready":"True"
	I0404 22:31:43.729347   41579 node_ready.go:38] duration metric: took 7.505067982s for node "test-preload-214349" to be "Ready" ...
	I0404 22:31:43.729356   41579 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:31:43.738731   41579 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-dv84q" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:44.745641   41579 pod_ready.go:92] pod "coredns-6d4b75cb6d-dv84q" in "kube-system" namespace has status "Ready":"True"
	I0404 22:31:44.745668   41579 pod_ready.go:81] duration metric: took 1.006903232s for pod "coredns-6d4b75cb6d-dv84q" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:44.745677   41579 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:45.251567   41579 pod_ready.go:92] pod "etcd-test-preload-214349" in "kube-system" namespace has status "Ready":"True"
	I0404 22:31:45.251588   41579 pod_ready.go:81] duration metric: took 505.905107ms for pod "etcd-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:45.251597   41579 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:45.255884   41579 pod_ready.go:92] pod "kube-apiserver-test-preload-214349" in "kube-system" namespace has status "Ready":"True"
	I0404 22:31:45.255902   41579 pod_ready.go:81] duration metric: took 4.298461ms for pod "kube-apiserver-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:45.255910   41579 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:45.259860   41579 pod_ready.go:92] pod "kube-controller-manager-test-preload-214349" in "kube-system" namespace has status "Ready":"True"
	I0404 22:31:45.259878   41579 pod_ready.go:81] duration metric: took 3.9623ms for pod "kube-controller-manager-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:45.259886   41579 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k9xlt" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:45.330016   41579 pod_ready.go:92] pod "kube-proxy-k9xlt" in "kube-system" namespace has status "Ready":"True"
	I0404 22:31:45.330039   41579 pod_ready.go:81] duration metric: took 70.146159ms for pod "kube-proxy-k9xlt" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:45.330052   41579 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:45.728088   41579 pod_ready.go:92] pod "kube-scheduler-test-preload-214349" in "kube-system" namespace has status "Ready":"True"
	I0404 22:31:45.728111   41579 pod_ready.go:81] duration metric: took 398.051622ms for pod "kube-scheduler-test-preload-214349" in "kube-system" namespace to be "Ready" ...
	I0404 22:31:45.728142   41579 pod_ready.go:38] duration metric: took 1.998776297s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:31:45.728158   41579 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:31:45.728205   41579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:31:45.745415   41579 api_server.go:72] duration metric: took 9.753723157s to wait for apiserver process to appear ...
	I0404 22:31:45.745440   41579 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:31:45.745473   41579 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0404 22:31:45.750821   41579 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0404 22:31:45.751675   41579 api_server.go:141] control plane version: v1.24.4
	I0404 22:31:45.751695   41579 api_server.go:131] duration metric: took 6.247727ms to wait for apiserver health ...
	I0404 22:31:45.751704   41579 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:31:45.931084   41579 system_pods.go:59] 7 kube-system pods found
	I0404 22:31:45.931111   41579 system_pods.go:61] "coredns-6d4b75cb6d-dv84q" [7b7b9216-cdb7-4058-be32-47508983cb98] Running
	I0404 22:31:45.931119   41579 system_pods.go:61] "etcd-test-preload-214349" [ca82b1ce-8a5d-41d2-8558-cfe568db34c4] Running
	I0404 22:31:45.931123   41579 system_pods.go:61] "kube-apiserver-test-preload-214349" [96fb3342-efa4-4769-a75d-398908b3c8ed] Running
	I0404 22:31:45.931128   41579 system_pods.go:61] "kube-controller-manager-test-preload-214349" [67ef08e9-e3c9-46eb-8778-cb93a01f8810] Running
	I0404 22:31:45.931133   41579 system_pods.go:61] "kube-proxy-k9xlt" [5452f7d3-b135-483f-af00-2cf75e23dedf] Running
	I0404 22:31:45.931138   41579 system_pods.go:61] "kube-scheduler-test-preload-214349" [1bafd113-b2cc-41f3-93aa-e51b3be48b17] Running
	I0404 22:31:45.931142   41579 system_pods.go:61] "storage-provisioner" [62e494b5-e25a-4f54-859f-ce54de8c305e] Running
	I0404 22:31:45.931148   41579 system_pods.go:74] duration metric: took 179.437978ms to wait for pod list to return data ...
	I0404 22:31:45.931159   41579 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:31:46.127809   41579 default_sa.go:45] found service account: "default"
	I0404 22:31:46.127841   41579 default_sa.go:55] duration metric: took 196.674543ms for default service account to be created ...
	I0404 22:31:46.127856   41579 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:31:46.330347   41579 system_pods.go:86] 7 kube-system pods found
	I0404 22:31:46.330373   41579 system_pods.go:89] "coredns-6d4b75cb6d-dv84q" [7b7b9216-cdb7-4058-be32-47508983cb98] Running
	I0404 22:31:46.330379   41579 system_pods.go:89] "etcd-test-preload-214349" [ca82b1ce-8a5d-41d2-8558-cfe568db34c4] Running
	I0404 22:31:46.330383   41579 system_pods.go:89] "kube-apiserver-test-preload-214349" [96fb3342-efa4-4769-a75d-398908b3c8ed] Running
	I0404 22:31:46.330386   41579 system_pods.go:89] "kube-controller-manager-test-preload-214349" [67ef08e9-e3c9-46eb-8778-cb93a01f8810] Running
	I0404 22:31:46.330391   41579 system_pods.go:89] "kube-proxy-k9xlt" [5452f7d3-b135-483f-af00-2cf75e23dedf] Running
	I0404 22:31:46.330395   41579 system_pods.go:89] "kube-scheduler-test-preload-214349" [1bafd113-b2cc-41f3-93aa-e51b3be48b17] Running
	I0404 22:31:46.330398   41579 system_pods.go:89] "storage-provisioner" [62e494b5-e25a-4f54-859f-ce54de8c305e] Running
	I0404 22:31:46.330404   41579 system_pods.go:126] duration metric: took 202.542634ms to wait for k8s-apps to be running ...
	I0404 22:31:46.330410   41579 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:31:46.330462   41579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:31:46.347160   41579 system_svc.go:56] duration metric: took 16.743286ms WaitForService to wait for kubelet
	I0404 22:31:46.347193   41579 kubeadm.go:576] duration metric: took 10.355504459s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:31:46.347211   41579 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:31:46.529336   41579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:31:46.529360   41579 node_conditions.go:123] node cpu capacity is 2
	I0404 22:31:46.529370   41579 node_conditions.go:105] duration metric: took 182.154333ms to run NodePressure ...
	I0404 22:31:46.529380   41579 start.go:240] waiting for startup goroutines ...
	I0404 22:31:46.529387   41579 start.go:245] waiting for cluster config update ...
	I0404 22:31:46.529398   41579 start.go:254] writing updated cluster config ...
	I0404 22:31:46.529650   41579 ssh_runner.go:195] Run: rm -f paused
	I0404 22:31:46.579301   41579 start.go:600] kubectl: 1.29.3, cluster: 1.24.4 (minor skew: 5)
	I0404 22:31:46.581514   41579 out.go:177] 
	W0404 22:31:46.582957   41579 out.go:239] ! /usr/local/bin/kubectl is version 1.29.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0404 22:31:46.584264   41579 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0404 22:31:46.585592   41579 out.go:177] * Done! kubectl is now configured to use "test-preload-214349" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.518135491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269907518104467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c35dd530-c5e8-4c48-b178-b1e7b95cde05 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.519283077Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e20d9ea-6c64-4b0c-942e-c5e2f228437a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.519341140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e20d9ea-6c64-4b0c-942e-c5e2f228437a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.519528439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d8870920be79bab93604d91a272d58aee4d599869b6a65e9f7d1c9a835ebdba,PodSandboxId:b07a98f9f4a8dc87b16f7af5bf8c550c3f0d9720addc72c1e177b22babdd05c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1712269902291792174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dv84q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7b9216-cdb7-4058-be32-47508983cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d2e48706,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e555f344a3221212f77e1d27b31ff176074317f6a1bf2d600de8388b40e84a1,PodSandboxId:3f15b8b7c75628a8ac62db4dc11cd658c52b08112db7e0c25449c65d0a7d9e77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269895124337906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 62e494b5-e25a-4f54-859f-ce54de8c305e,},Annotations:map[string]string{io.kubernetes.container.hash: d20c213e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8c37bfb3894b6fa82cf957deccc752add2627d681df4b2612c35c335e6d12,PodSandboxId:fc99bfc6d28f31a77f93c75b8e6b8f297f682bca50cf7a3485643db94062da2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1712269894817380831,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k9xlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54
52f7d3-b135-483f-af00-2cf75e23dedf,},Annotations:map[string]string{io.kubernetes.container.hash: a17fb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a3b9b0e63dc340420679fb7b5110c975d1f853e9fb495fc9c01fe496d3b870,PodSandboxId:ccb015e9b0a25cacacadeb245bee88706413e27014d3d8be1f863d84165238b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1712269888855791700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a14eeb4ad684e30487c3d9543021f2,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5dcf51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca79d149dd1d41393b767ce23186b943d2b00c8575d57128b1f4cdb0786e40,PodSandboxId:85d2717ead51841120981eb13b702bbbeec780f088cb7568078c61de854ec673,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1712269888859618259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e982cadecd1a0a9f4d6d4161
ae98f6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc48435cc70ffbede67500f9ccfef3b10608295c4043f70d8f86d4728d35aa8,PodSandboxId:994913d9f463f44e16150009f6d2a61fcf1a64861ad286910982628cede8bc22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1712269888841119948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317f492fb332f9504363238eb79be3c9,},An
notations:map[string]string{io.kubernetes.container.hash: 3cba74bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd3ccf655da30c8ebdee6736c6e0c69e28c50ac21441b6acbe5f3e8225bd990,PodSandboxId:23136094776ed6e7c39429dd02b88dc33830845d0ab46e44435d2f5bec5e6d01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1712269888745907340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadb6662aa31411f2df65356ce52020b,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e20d9ea-6c64-4b0c-942e-c5e2f228437a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.557379005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84fbc2c4-bf2d-4042-979c-25a5fa60dc69 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.557454963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84fbc2c4-bf2d-4042-979c-25a5fa60dc69 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.558375058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98ff6390-7c94-4fa8-9a5e-c8cb619aa108 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.558788331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269907558765881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98ff6390-7c94-4fa8-9a5e-c8cb619aa108 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.559306518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1beff5a-dcbb-4807-9e5b-e94a291b24b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.559355406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1beff5a-dcbb-4807-9e5b-e94a291b24b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.559531559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d8870920be79bab93604d91a272d58aee4d599869b6a65e9f7d1c9a835ebdba,PodSandboxId:b07a98f9f4a8dc87b16f7af5bf8c550c3f0d9720addc72c1e177b22babdd05c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1712269902291792174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dv84q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7b9216-cdb7-4058-be32-47508983cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d2e48706,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e555f344a3221212f77e1d27b31ff176074317f6a1bf2d600de8388b40e84a1,PodSandboxId:3f15b8b7c75628a8ac62db4dc11cd658c52b08112db7e0c25449c65d0a7d9e77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269895124337906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 62e494b5-e25a-4f54-859f-ce54de8c305e,},Annotations:map[string]string{io.kubernetes.container.hash: d20c213e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8c37bfb3894b6fa82cf957deccc752add2627d681df4b2612c35c335e6d12,PodSandboxId:fc99bfc6d28f31a77f93c75b8e6b8f297f682bca50cf7a3485643db94062da2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1712269894817380831,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k9xlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54
52f7d3-b135-483f-af00-2cf75e23dedf,},Annotations:map[string]string{io.kubernetes.container.hash: a17fb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a3b9b0e63dc340420679fb7b5110c975d1f853e9fb495fc9c01fe496d3b870,PodSandboxId:ccb015e9b0a25cacacadeb245bee88706413e27014d3d8be1f863d84165238b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1712269888855791700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a14eeb4ad684e30487c3d9543021f2,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5dcf51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca79d149dd1d41393b767ce23186b943d2b00c8575d57128b1f4cdb0786e40,PodSandboxId:85d2717ead51841120981eb13b702bbbeec780f088cb7568078c61de854ec673,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1712269888859618259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e982cadecd1a0a9f4d6d4161
ae98f6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc48435cc70ffbede67500f9ccfef3b10608295c4043f70d8f86d4728d35aa8,PodSandboxId:994913d9f463f44e16150009f6d2a61fcf1a64861ad286910982628cede8bc22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1712269888841119948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317f492fb332f9504363238eb79be3c9,},An
notations:map[string]string{io.kubernetes.container.hash: 3cba74bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd3ccf655da30c8ebdee6736c6e0c69e28c50ac21441b6acbe5f3e8225bd990,PodSandboxId:23136094776ed6e7c39429dd02b88dc33830845d0ab46e44435d2f5bec5e6d01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1712269888745907340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadb6662aa31411f2df65356ce52020b,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1beff5a-dcbb-4807-9e5b-e94a291b24b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.597816920Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f09c9444-7dd4-437e-a29a-47f9b467af88 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.597890977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f09c9444-7dd4-437e-a29a-47f9b467af88 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.598913300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81b8371c-321e-4eca-b386-5607591d0ceb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.599423130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269907599397717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81b8371c-321e-4eca-b386-5607591d0ceb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.599928178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5ba92a9-2ba9-4256-a873-be6671b7d5f2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.599974967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5ba92a9-2ba9-4256-a873-be6671b7d5f2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.600537297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d8870920be79bab93604d91a272d58aee4d599869b6a65e9f7d1c9a835ebdba,PodSandboxId:b07a98f9f4a8dc87b16f7af5bf8c550c3f0d9720addc72c1e177b22babdd05c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1712269902291792174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dv84q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7b9216-cdb7-4058-be32-47508983cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d2e48706,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e555f344a3221212f77e1d27b31ff176074317f6a1bf2d600de8388b40e84a1,PodSandboxId:3f15b8b7c75628a8ac62db4dc11cd658c52b08112db7e0c25449c65d0a7d9e77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269895124337906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 62e494b5-e25a-4f54-859f-ce54de8c305e,},Annotations:map[string]string{io.kubernetes.container.hash: d20c213e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8c37bfb3894b6fa82cf957deccc752add2627d681df4b2612c35c335e6d12,PodSandboxId:fc99bfc6d28f31a77f93c75b8e6b8f297f682bca50cf7a3485643db94062da2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1712269894817380831,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k9xlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54
52f7d3-b135-483f-af00-2cf75e23dedf,},Annotations:map[string]string{io.kubernetes.container.hash: a17fb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a3b9b0e63dc340420679fb7b5110c975d1f853e9fb495fc9c01fe496d3b870,PodSandboxId:ccb015e9b0a25cacacadeb245bee88706413e27014d3d8be1f863d84165238b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1712269888855791700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a14eeb4ad684e30487c3d9543021f2,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5dcf51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca79d149dd1d41393b767ce23186b943d2b00c8575d57128b1f4cdb0786e40,PodSandboxId:85d2717ead51841120981eb13b702bbbeec780f088cb7568078c61de854ec673,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1712269888859618259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e982cadecd1a0a9f4d6d4161
ae98f6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc48435cc70ffbede67500f9ccfef3b10608295c4043f70d8f86d4728d35aa8,PodSandboxId:994913d9f463f44e16150009f6d2a61fcf1a64861ad286910982628cede8bc22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1712269888841119948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317f492fb332f9504363238eb79be3c9,},An
notations:map[string]string{io.kubernetes.container.hash: 3cba74bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd3ccf655da30c8ebdee6736c6e0c69e28c50ac21441b6acbe5f3e8225bd990,PodSandboxId:23136094776ed6e7c39429dd02b88dc33830845d0ab46e44435d2f5bec5e6d01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1712269888745907340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadb6662aa31411f2df65356ce52020b,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5ba92a9-2ba9-4256-a873-be6671b7d5f2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.637371849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2f54356-1c85-4043-9563-81a9629b1d32 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.637447228Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2f54356-1c85-4043-9563-81a9629b1d32 name=/runtime.v1.RuntimeService/Version
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.638649136Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=028833c2-9d48-45ed-b848-2ff30c611e5f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.639360747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712269907639332316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=028833c2-9d48-45ed-b848-2ff30c611e5f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.639854152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3a3717c-ff14-49e2-b0cb-3ddf3b24de66 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.639903781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3a3717c-ff14-49e2-b0cb-3ddf3b24de66 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:31:47 test-preload-214349 crio[682]: time="2024-04-04 22:31:47.640133874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d8870920be79bab93604d91a272d58aee4d599869b6a65e9f7d1c9a835ebdba,PodSandboxId:b07a98f9f4a8dc87b16f7af5bf8c550c3f0d9720addc72c1e177b22babdd05c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1712269902291792174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dv84q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7b9216-cdb7-4058-be32-47508983cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d2e48706,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e555f344a3221212f77e1d27b31ff176074317f6a1bf2d600de8388b40e84a1,PodSandboxId:3f15b8b7c75628a8ac62db4dc11cd658c52b08112db7e0c25449c65d0a7d9e77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712269895124337906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 62e494b5-e25a-4f54-859f-ce54de8c305e,},Annotations:map[string]string{io.kubernetes.container.hash: d20c213e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8c37bfb3894b6fa82cf957deccc752add2627d681df4b2612c35c335e6d12,PodSandboxId:fc99bfc6d28f31a77f93c75b8e6b8f297f682bca50cf7a3485643db94062da2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1712269894817380831,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k9xlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54
52f7d3-b135-483f-af00-2cf75e23dedf,},Annotations:map[string]string{io.kubernetes.container.hash: a17fb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a3b9b0e63dc340420679fb7b5110c975d1f853e9fb495fc9c01fe496d3b870,PodSandboxId:ccb015e9b0a25cacacadeb245bee88706413e27014d3d8be1f863d84165238b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1712269888855791700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a14eeb4ad684e30487c3d9543021f2,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5dcf51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cca79d149dd1d41393b767ce23186b943d2b00c8575d57128b1f4cdb0786e40,PodSandboxId:85d2717ead51841120981eb13b702bbbeec780f088cb7568078c61de854ec673,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1712269888859618259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e982cadecd1a0a9f4d6d4161
ae98f6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc48435cc70ffbede67500f9ccfef3b10608295c4043f70d8f86d4728d35aa8,PodSandboxId:994913d9f463f44e16150009f6d2a61fcf1a64861ad286910982628cede8bc22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1712269888841119948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317f492fb332f9504363238eb79be3c9,},An
notations:map[string]string{io.kubernetes.container.hash: 3cba74bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd3ccf655da30c8ebdee6736c6e0c69e28c50ac21441b6acbe5f3e8225bd990,PodSandboxId:23136094776ed6e7c39429dd02b88dc33830845d0ab46e44435d2f5bec5e6d01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1712269888745907340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-214349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadb6662aa31411f2df65356ce52020b,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3a3717c-ff14-49e2-b0cb-3ddf3b24de66 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8d8870920be79       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   b07a98f9f4a8d       coredns-6d4b75cb6d-dv84q
	7e555f344a322       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   3f15b8b7c7562       storage-provisioner
	63f8c37bfb389       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   fc99bfc6d28f3       kube-proxy-k9xlt
	9cca79d149dd1       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   85d2717ead518       kube-controller-manager-test-preload-214349
	93a3b9b0e63dc       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   ccb015e9b0a25       etcd-test-preload-214349
	0dc48435cc70f       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   994913d9f463f       kube-apiserver-test-preload-214349
	5fd3ccf655da3       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   23136094776ed       kube-scheduler-test-preload-214349
	
	
	==> coredns [8d8870920be79bab93604d91a272d58aee4d599869b6a65e9f7d1c9a835ebdba] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:44744 - 56244 "HINFO IN 3806376677256271298.8263111179127827267. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009961619s
	
	
	==> describe nodes <==
	Name:               test-preload-214349
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-214349
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=test-preload-214349
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T22_30_07_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 22:30:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-214349
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 22:31:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 22:31:43 +0000   Thu, 04 Apr 2024 22:30:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 22:31:43 +0000   Thu, 04 Apr 2024 22:30:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 22:31:43 +0000   Thu, 04 Apr 2024 22:30:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 22:31:43 +0000   Thu, 04 Apr 2024 22:31:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    test-preload-214349
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7beeeccdf50b4d269265773a89c50a62
	  System UUID:                7beeeccd-f50b-4d26-9265-773a89c50a62
	  Boot ID:                    fafabb52-dd68-4d4f-a2d2-5ea5e524745b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dv84q                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 etcd-test-preload-214349                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         100s
	  kube-system                 kube-apiserver-test-preload-214349             250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-test-preload-214349    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-k9xlt                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-test-preload-214349             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 85s                  kube-proxy       
	  Normal  Starting                 12s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s (x3 over 108s)  kubelet          Node test-preload-214349 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     108s (x2 over 108s)  kubelet          Node test-preload-214349 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    108s (x3 over 108s)  kubelet          Node test-preload-214349 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s                 kubelet          Node test-preload-214349 status is now: NodeHasSufficientPID
	  Normal  Starting                 100s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  100s                 kubelet          Node test-preload-214349 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s                 kubelet          Node test-preload-214349 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                90s                  kubelet          Node test-preload-214349 status is now: NodeReady
	  Normal  RegisteredNode           87s                  node-controller  Node test-preload-214349 event: Registered Node test-preload-214349 in Controller
	  Normal  Starting                 19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)    kubelet          Node test-preload-214349 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)    kubelet          Node test-preload-214349 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)    kubelet          Node test-preload-214349 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                   node-controller  Node test-preload-214349 event: Registered Node test-preload-214349 in Controller
	
	
	==> dmesg <==
	[Apr 4 22:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052892] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041602] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.569451] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Apr 4 22:31] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.733413] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000065] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.089083] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.060488] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064132] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.204638] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.137133] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.305136] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[ +13.020829] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.065454] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.743047] systemd-fstab-generator[1070]: Ignoring "noauto" option for root device
	[  +4.665367] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.542923] systemd-fstab-generator[1702]: Ignoring "noauto" option for root device
	[  +6.020161] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [93a3b9b0e63dc340420679fb7b5110c975d1f853e9fb495fc9c01fe496d3b870] <==
	{"level":"info","ts":"2024-04-04T22:31:29.281Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"38b26e584d45e0da","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-04T22:31:29.298Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-04T22:31:29.305Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-04T22:31:29.308Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"38b26e584d45e0da","initial-advertise-peer-urls":["https://192.168.39.38:2380"],"listen-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-04T22:31:29.306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da switched to configuration voters=(4085449137511063770)"}
	{"level":"info","ts":"2024-04-04T22:31:29.310Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-04T22:31:29.310Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","added-peer-id":"38b26e584d45e0da","added-peer-peer-urls":["https://192.168.39.38:2380"]}
	{"level":"info","ts":"2024-04-04T22:31:29.306Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2024-04-04T22:31:29.310Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2024-04-04T22:31:29.310Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:31:29.310Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:31:30.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-04T22:31:30.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-04T22:31:30.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgPreVoteResp from 38b26e584d45e0da at term 2"}
	{"level":"info","ts":"2024-04-04T22:31:30.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became candidate at term 3"}
	{"level":"info","ts":"2024-04-04T22:31:30.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgVoteResp from 38b26e584d45e0da at term 3"}
	{"level":"info","ts":"2024-04-04T22:31:30.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became leader at term 3"}
	{"level":"info","ts":"2024-04-04T22:31:30.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38b26e584d45e0da elected leader 38b26e584d45e0da at term 3"}
	{"level":"info","ts":"2024-04-04T22:31:30.946Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"38b26e584d45e0da","local-member-attributes":"{Name:test-preload-214349 ClientURLs:[https://192.168.39.38:2379]}","request-path":"/0/members/38b26e584d45e0da/attributes","cluster-id":"afb1a6a08b4dab74","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-04T22:31:30.947Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:31:30.947Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:31:30.948Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.38:2379"}
	{"level":"info","ts":"2024-04-04T22:31:30.948Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-04T22:31:30.948Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-04T22:31:30.949Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:31:47 up 0 min,  0 users,  load average: 1.04, 0.28, 0.09
	Linux test-preload-214349 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0dc48435cc70ffbede67500f9ccfef3b10608295c4043f70d8f86d4728d35aa8] <==
	I0404 22:31:33.382453       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0404 22:31:33.382493       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0404 22:31:33.382531       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0404 22:31:33.382535       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0404 22:31:33.382566       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0404 22:31:33.382875       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0404 22:31:33.442488       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0404 22:31:33.474522       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0404 22:31:33.478508       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0404 22:31:33.481070       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0404 22:31:33.485472       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0404 22:31:33.485547       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0404 22:31:33.485924       1 cache.go:39] Caches are synced for autoregister controller
	E0404 22:31:33.488110       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0404 22:31:33.506739       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0404 22:31:34.067718       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0404 22:31:34.384586       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0404 22:31:34.897575       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0404 22:31:34.912763       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0404 22:31:34.983399       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0404 22:31:35.019978       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0404 22:31:35.031758       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0404 22:31:35.313125       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0404 22:31:46.432110       1 controller.go:611] quota admission added evaluator for: endpoints
	I0404 22:31:46.483158       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [9cca79d149dd1d41393b767ce23186b943d2b00c8575d57128b1f4cdb0786e40] <==
	I0404 22:31:46.465386       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0404 22:31:46.468143       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0404 22:31:46.471203       1 shared_informer.go:262] Caches are synced for deployment
	I0404 22:31:46.472791       1 shared_informer.go:262] Caches are synced for cronjob
	I0404 22:31:46.474511       1 shared_informer.go:262] Caches are synced for crt configmap
	I0404 22:31:46.479110       1 shared_informer.go:262] Caches are synced for ephemeral
	I0404 22:31:46.479677       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0404 22:31:46.479778       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0404 22:31:46.483535       1 shared_informer.go:262] Caches are synced for GC
	I0404 22:31:46.484979       1 shared_informer.go:262] Caches are synced for disruption
	I0404 22:31:46.485010       1 disruption.go:371] Sending events to api server.
	I0404 22:31:46.487543       1 shared_informer.go:262] Caches are synced for TTL
	I0404 22:31:46.567789       1 shared_informer.go:262] Caches are synced for expand
	I0404 22:31:46.567948       1 shared_informer.go:262] Caches are synced for persistent volume
	I0404 22:31:46.575940       1 shared_informer.go:262] Caches are synced for attach detach
	I0404 22:31:46.581862       1 shared_informer.go:262] Caches are synced for PV protection
	I0404 22:31:46.624606       1 shared_informer.go:262] Caches are synced for namespace
	I0404 22:31:46.658115       1 shared_informer.go:262] Caches are synced for stateful set
	I0404 22:31:46.662654       1 shared_informer.go:262] Caches are synced for resource quota
	I0404 22:31:46.662750       1 shared_informer.go:262] Caches are synced for daemon sets
	I0404 22:31:46.664085       1 shared_informer.go:262] Caches are synced for service account
	I0404 22:31:46.702199       1 shared_informer.go:262] Caches are synced for resource quota
	I0404 22:31:47.118182       1 shared_informer.go:262] Caches are synced for garbage collector
	I0404 22:31:47.153951       1 shared_informer.go:262] Caches are synced for garbage collector
	I0404 22:31:47.153991       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [63f8c37bfb3894b6fa82cf957deccc752add2627d681df4b2612c35c335e6d12] <==
	I0404 22:31:35.271451       1 node.go:163] Successfully retrieved node IP: 192.168.39.38
	I0404 22:31:35.271643       1 server_others.go:138] "Detected node IP" address="192.168.39.38"
	I0404 22:31:35.271762       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0404 22:31:35.305260       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0404 22:31:35.305329       1 server_others.go:206] "Using iptables Proxier"
	I0404 22:31:35.305359       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0404 22:31:35.306222       1 server.go:661] "Version info" version="v1.24.4"
	I0404 22:31:35.306283       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:31:35.307538       1 config.go:317] "Starting service config controller"
	I0404 22:31:35.308178       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0404 22:31:35.308246       1 config.go:226] "Starting endpoint slice config controller"
	I0404 22:31:35.308265       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0404 22:31:35.309222       1 config.go:444] "Starting node config controller"
	I0404 22:31:35.309329       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0404 22:31:35.408634       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0404 22:31:35.408759       1 shared_informer.go:262] Caches are synced for service config
	I0404 22:31:35.409539       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [5fd3ccf655da30c8ebdee6736c6e0c69e28c50ac21441b6acbe5f3e8225bd990] <==
	I0404 22:31:30.155455       1 serving.go:348] Generated self-signed cert in-memory
	I0404 22:31:33.514015       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0404 22:31:33.514132       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:31:33.544185       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0404 22:31:33.544580       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0404 22:31:33.544693       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0404 22:31:33.544768       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:31:33.544848       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0404 22:31:33.544899       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0404 22:31:33.547586       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0404 22:31:33.548196       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0404 22:31:33.645168       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0404 22:31:33.645465       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:31:33.646114       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Apr 04 22:31:33 test-preload-214349 kubelet[1077]: I0404 22:31:33.521198    1077 setters.go:532] "Node became not ready" node="test-preload-214349" condition={Type:Ready Status:False LastHeartbeatTime:2024-04-04 22:31:33.52115478 +0000 UTC m=+5.611814300 LastTransitionTime:2024-04-04 22:31:33.52115478 +0000 UTC m=+5.611814300 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.043088    1077 apiserver.go:52] "Watching apiserver"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.047790    1077 topology_manager.go:200] "Topology Admit Handler"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.047883    1077 topology_manager.go:200] "Topology Admit Handler"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.047969    1077 topology_manager.go:200] "Topology Admit Handler"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: E0404 22:31:34.050439    1077 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-dv84q" podUID=7b7b9216-cdb7-4058-be32-47508983cb98
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.165590    1077 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e0818bb3-764c-4f33-9d19-49a1e5bfa18d path="/var/lib/kubelet/pods/e0818bb3-764c-4f33-9d19-49a1e5bfa18d/volumes"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.186121    1077 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5452f7d3-b135-483f-af00-2cf75e23dedf-kube-proxy\") pod \"kube-proxy-k9xlt\" (UID: \"5452f7d3-b135-483f-af00-2cf75e23dedf\") " pod="kube-system/kube-proxy-k9xlt"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.186291    1077 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5452f7d3-b135-483f-af00-2cf75e23dedf-xtables-lock\") pod \"kube-proxy-k9xlt\" (UID: \"5452f7d3-b135-483f-af00-2cf75e23dedf\") " pod="kube-system/kube-proxy-k9xlt"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.186483    1077 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vmf8\" (UniqueName: \"kubernetes.io/projected/5452f7d3-b135-483f-af00-2cf75e23dedf-kube-api-access-5vmf8\") pod \"kube-proxy-k9xlt\" (UID: \"5452f7d3-b135-483f-af00-2cf75e23dedf\") " pod="kube-system/kube-proxy-k9xlt"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.186549    1077 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b7b9216-cdb7-4058-be32-47508983cb98-config-volume\") pod \"coredns-6d4b75cb6d-dv84q\" (UID: \"7b7b9216-cdb7-4058-be32-47508983cb98\") " pod="kube-system/coredns-6d4b75cb6d-dv84q"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.186641    1077 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9cbm\" (UniqueName: \"kubernetes.io/projected/62e494b5-e25a-4f54-859f-ce54de8c305e-kube-api-access-l9cbm\") pod \"storage-provisioner\" (UID: \"62e494b5-e25a-4f54-859f-ce54de8c305e\") " pod="kube-system/storage-provisioner"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.186697    1077 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5452f7d3-b135-483f-af00-2cf75e23dedf-lib-modules\") pod \"kube-proxy-k9xlt\" (UID: \"5452f7d3-b135-483f-af00-2cf75e23dedf\") " pod="kube-system/kube-proxy-k9xlt"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.186728    1077 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/62e494b5-e25a-4f54-859f-ce54de8c305e-tmp\") pod \"storage-provisioner\" (UID: \"62e494b5-e25a-4f54-859f-ce54de8c305e\") " pod="kube-system/storage-provisioner"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.186747    1077 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-422dd\" (UniqueName: \"kubernetes.io/projected/7b7b9216-cdb7-4058-be32-47508983cb98-kube-api-access-422dd\") pod \"coredns-6d4b75cb6d-dv84q\" (UID: \"7b7b9216-cdb7-4058-be32-47508983cb98\") " pod="kube-system/coredns-6d4b75cb6d-dv84q"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: I0404 22:31:34.186768    1077 reconciler.go:159] "Reconciler: start to sync state"
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: E0404 22:31:34.291233    1077 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: E0404 22:31:34.291656    1077 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7b7b9216-cdb7-4058-be32-47508983cb98-config-volume podName:7b7b9216-cdb7-4058-be32-47508983cb98 nodeName:}" failed. No retries permitted until 2024-04-04 22:31:34.791540594 +0000 UTC m=+6.882200152 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7b7b9216-cdb7-4058-be32-47508983cb98-config-volume") pod "coredns-6d4b75cb6d-dv84q" (UID: "7b7b9216-cdb7-4058-be32-47508983cb98") : object "kube-system"/"coredns" not registered
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: E0404 22:31:34.794680    1077 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 04 22:31:34 test-preload-214349 kubelet[1077]: E0404 22:31:34.794778    1077 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7b7b9216-cdb7-4058-be32-47508983cb98-config-volume podName:7b7b9216-cdb7-4058-be32-47508983cb98 nodeName:}" failed. No retries permitted until 2024-04-04 22:31:35.794753093 +0000 UTC m=+7.885412624 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7b7b9216-cdb7-4058-be32-47508983cb98-config-volume") pod "coredns-6d4b75cb6d-dv84q" (UID: "7b7b9216-cdb7-4058-be32-47508983cb98") : object "kube-system"/"coredns" not registered
	Apr 04 22:31:35 test-preload-214349 kubelet[1077]: E0404 22:31:35.802196    1077 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 04 22:31:35 test-preload-214349 kubelet[1077]: E0404 22:31:35.802691    1077 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7b7b9216-cdb7-4058-be32-47508983cb98-config-volume podName:7b7b9216-cdb7-4058-be32-47508983cb98 nodeName:}" failed. No retries permitted until 2024-04-04 22:31:37.802666983 +0000 UTC m=+9.893326515 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7b7b9216-cdb7-4058-be32-47508983cb98-config-volume") pod "coredns-6d4b75cb6d-dv84q" (UID: "7b7b9216-cdb7-4058-be32-47508983cb98") : object "kube-system"/"coredns" not registered
	Apr 04 22:31:36 test-preload-214349 kubelet[1077]: E0404 22:31:36.155267    1077 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-dv84q" podUID=7b7b9216-cdb7-4058-be32-47508983cb98
	Apr 04 22:31:37 test-preload-214349 kubelet[1077]: E0404 22:31:37.820521    1077 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 04 22:31:37 test-preload-214349 kubelet[1077]: E0404 22:31:37.820662    1077 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7b7b9216-cdb7-4058-be32-47508983cb98-config-volume podName:7b7b9216-cdb7-4058-be32-47508983cb98 nodeName:}" failed. No retries permitted until 2024-04-04 22:31:41.820634838 +0000 UTC m=+13.911294360 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7b7b9216-cdb7-4058-be32-47508983cb98-config-volume") pod "coredns-6d4b75cb6d-dv84q" (UID: "7b7b9216-cdb7-4058-be32-47508983cb98") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [7e555f344a3221212f77e1d27b31ff176074317f6a1bf2d600de8388b40e84a1] <==
	I0404 22:31:35.255871       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-214349 -n test-preload-214349
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-214349 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-214349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-214349
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-214349: (1.117022664s)
--- FAIL: TestPreload (244.13s)
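
Note for triage: the node snapshot captured in the post-mortem above can be regenerated with stock kubectl against a live profile. A minimal, hypothetical session (run before the profile cleanup step deletes test-preload-214349; the context name is taken from the log above):

    kubectl --context test-preload-214349 get nodes -o wide
    kubectl --context test-preload-214349 describe node test-preload-214349
    kubectl --context test-preload-214349 get events -A --sort-by=.lastTimestamp

These are plain kubectl commands, not part of the test harness; they only reproduce the "describe node" and events sections shown above for manual inspection.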

                                                
                                    
x
+
TestKubernetesUpgrade (356.67s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-013199 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-013199 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.623777047s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-013199] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-013199" primary control-plane node in "kubernetes-upgrade-013199" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 22:33:46.144559   42937 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:33:46.144909   42937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:33:46.144922   42937 out.go:304] Setting ErrFile to fd 2...
	I0404 22:33:46.144930   42937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:33:46.148096   42937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:33:46.148989   42937 out.go:298] Setting JSON to false
	I0404 22:33:46.150333   42937 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4572,"bootTime":1712265455,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:33:46.150440   42937 start.go:139] virtualization: kvm guest
	I0404 22:33:46.153123   42937 out.go:177] * [kubernetes-upgrade-013199] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:33:46.157118   42937 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:33:46.155357   42937 notify.go:220] Checking for updates...
	I0404 22:33:46.161061   42937 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:33:46.162551   42937 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:33:46.166822   42937 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:33:46.168669   42937 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:33:46.170511   42937 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:33:46.172954   42937 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:33:46.215186   42937 out.go:177] * Using the kvm2 driver based on user configuration
	I0404 22:33:46.217007   42937 start.go:297] selected driver: kvm2
	I0404 22:33:46.217025   42937 start.go:901] validating driver "kvm2" against <nil>
	I0404 22:33:46.217036   42937 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:33:46.217770   42937 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:33:46.217855   42937 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:33:46.233353   42937 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:33:46.233422   42937 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0404 22:33:46.233658   42937 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0404 22:33:46.233708   42937 cni.go:84] Creating CNI manager for ""
	I0404 22:33:46.233721   42937 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:33:46.233730   42937 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0404 22:33:46.233783   42937 start.go:340] cluster config:
	{Name:kubernetes-upgrade-013199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-013199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:33:46.233890   42937 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:33:46.236062   42937 out.go:177] * Starting "kubernetes-upgrade-013199" primary control-plane node in "kubernetes-upgrade-013199" cluster
	I0404 22:33:46.237557   42937 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:33:46.237608   42937 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0404 22:33:46.237629   42937 cache.go:56] Caching tarball of preloaded images
	I0404 22:33:46.237728   42937 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 22:33:46.237741   42937 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0404 22:33:46.238070   42937 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/config.json ...
	I0404 22:33:46.238095   42937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/config.json: {Name:mk410a420f42a21bd9cee3006dae766586977bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:33:46.238246   42937 start.go:360] acquireMachinesLock for kubernetes-upgrade-013199: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:33:46.238289   42937 start.go:364] duration metric: took 21.32µs to acquireMachinesLock for "kubernetes-upgrade-013199"
	I0404 22:33:46.238313   42937 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-013199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-013199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:33:46.238384   42937 start.go:125] createHost starting for "" (driver="kvm2")
	I0404 22:33:46.240334   42937 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0404 22:33:46.240480   42937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:33:46.240526   42937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:33:46.256353   42937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38067
	I0404 22:33:46.256902   42937 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:33:46.257495   42937 main.go:141] libmachine: Using API Version  1
	I0404 22:33:46.257514   42937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:33:46.257919   42937 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:33:46.258113   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetMachineName
	I0404 22:33:46.258289   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .DriverName
	I0404 22:33:46.258442   42937 start.go:159] libmachine.API.Create for "kubernetes-upgrade-013199" (driver="kvm2")
	I0404 22:33:46.258465   42937 client.go:168] LocalClient.Create starting
	I0404 22:33:46.258489   42937 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem
	I0404 22:33:46.258524   42937 main.go:141] libmachine: Decoding PEM data...
	I0404 22:33:46.258538   42937 main.go:141] libmachine: Parsing certificate...
	I0404 22:33:46.258587   42937 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem
	I0404 22:33:46.258606   42937 main.go:141] libmachine: Decoding PEM data...
	I0404 22:33:46.258620   42937 main.go:141] libmachine: Parsing certificate...
	I0404 22:33:46.258642   42937 main.go:141] libmachine: Running pre-create checks...
	I0404 22:33:46.258650   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .PreCreateCheck
	I0404 22:33:46.259097   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetConfigRaw
	I0404 22:33:46.259456   42937 main.go:141] libmachine: Creating machine...
	I0404 22:33:46.259470   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .Create
	I0404 22:33:46.259612   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Creating KVM machine...
	I0404 22:33:46.260870   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found existing default KVM network
	I0404 22:33:46.261602   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:46.261456   43015 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d960}
	I0404 22:33:46.261629   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | created network xml: 
	I0404 22:33:46.261665   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | <network>
	I0404 22:33:46.261688   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG |   <name>mk-kubernetes-upgrade-013199</name>
	I0404 22:33:46.261700   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG |   <dns enable='no'/>
	I0404 22:33:46.261711   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG |   
	I0404 22:33:46.261727   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0404 22:33:46.261743   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG |     <dhcp>
	I0404 22:33:46.261757   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0404 22:33:46.261769   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG |     </dhcp>
	I0404 22:33:46.261780   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG |   </ip>
	I0404 22:33:46.261801   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG |   
	I0404 22:33:46.261817   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | </network>
	I0404 22:33:46.261833   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | 
	I0404 22:33:46.267143   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | trying to create private KVM network mk-kubernetes-upgrade-013199 192.168.39.0/24...
	I0404 22:33:46.348982   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Setting up store path in /home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199 ...
	I0404 22:33:46.349029   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Building disk image from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 22:33:46.349048   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | private KVM network mk-kubernetes-upgrade-013199 192.168.39.0/24 created
	I0404 22:33:46.349070   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:46.348865   43015 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:33:46.349099   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Downloading /home/jenkins/minikube-integration/16143-5297/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0404 22:33:46.576996   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:46.576864   43015 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199/id_rsa...
	I0404 22:33:46.803811   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:46.803679   43015 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199/kubernetes-upgrade-013199.rawdisk...
	I0404 22:33:46.803847   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Writing magic tar header
	I0404 22:33:46.803866   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Writing SSH key tar header
	I0404 22:33:46.803880   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:46.803815   43015 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199 ...
	I0404 22:33:46.804050   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199 (perms=drwx------)
	I0404 22:33:46.804080   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199
	I0404 22:33:46.804094   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines (perms=drwxr-xr-x)
	I0404 22:33:46.804112   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube (perms=drwxr-xr-x)
	I0404 22:33:46.804150   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines
	I0404 22:33:46.804165   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297 (perms=drwxrwxr-x)
	I0404 22:33:46.804181   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0404 22:33:46.804209   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0404 22:33:46.804223   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:33:46.804265   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Creating domain...
	I0404 22:33:46.804289   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297
	I0404 22:33:46.804300   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0404 22:33:46.804314   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Checking permissions on dir: /home/jenkins
	I0404 22:33:46.804327   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Checking permissions on dir: /home
	I0404 22:33:46.804342   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Skipping /home - not owner
	I0404 22:33:46.805495   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) define libvirt domain using xml: 
	I0404 22:33:46.805516   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) <domain type='kvm'>
	I0404 22:33:46.805528   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   <name>kubernetes-upgrade-013199</name>
	I0404 22:33:46.805536   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   <memory unit='MiB'>2200</memory>
	I0404 22:33:46.805546   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   <vcpu>2</vcpu>
	I0404 22:33:46.805558   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   <features>
	I0404 22:33:46.805566   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <acpi/>
	I0404 22:33:46.805575   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <apic/>
	I0404 22:33:46.805581   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <pae/>
	I0404 22:33:46.805594   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     
	I0404 22:33:46.805606   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   </features>
	I0404 22:33:46.805617   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   <cpu mode='host-passthrough'>
	I0404 22:33:46.805630   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   
	I0404 22:33:46.805644   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   </cpu>
	I0404 22:33:46.805672   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   <os>
	I0404 22:33:46.805700   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <type>hvm</type>
	I0404 22:33:46.805728   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <boot dev='cdrom'/>
	I0404 22:33:46.805798   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <boot dev='hd'/>
	I0404 22:33:46.805823   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <bootmenu enable='no'/>
	I0404 22:33:46.805832   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   </os>
	I0404 22:33:46.805842   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   <devices>
	I0404 22:33:46.805851   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <disk type='file' device='cdrom'>
	I0404 22:33:46.805864   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199/boot2docker.iso'/>
	I0404 22:33:46.805881   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <target dev='hdc' bus='scsi'/>
	I0404 22:33:46.805891   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <readonly/>
	I0404 22:33:46.805897   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     </disk>
	I0404 22:33:46.805910   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <disk type='file' device='disk'>
	I0404 22:33:46.805938   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0404 22:33:46.805957   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199/kubernetes-upgrade-013199.rawdisk'/>
	I0404 22:33:46.805977   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <target dev='hda' bus='virtio'/>
	I0404 22:33:46.805986   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     </disk>
	I0404 22:33:46.805995   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <interface type='network'>
	I0404 22:33:46.806008   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <source network='mk-kubernetes-upgrade-013199'/>
	I0404 22:33:46.806024   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <model type='virtio'/>
	I0404 22:33:46.806036   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     </interface>
	I0404 22:33:46.806044   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <interface type='network'>
	I0404 22:33:46.806057   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <source network='default'/>
	I0404 22:33:46.806068   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <model type='virtio'/>
	I0404 22:33:46.806076   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     </interface>
	I0404 22:33:46.806087   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <serial type='pty'>
	I0404 22:33:46.806115   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <target port='0'/>
	I0404 22:33:46.806134   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     </serial>
	I0404 22:33:46.806149   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <console type='pty'>
	I0404 22:33:46.806162   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <target type='serial' port='0'/>
	I0404 22:33:46.806175   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     </console>
	I0404 22:33:46.806186   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     <rng model='virtio'>
	I0404 22:33:46.806197   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)       <backend model='random'>/dev/random</backend>
	I0404 22:33:46.806206   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     </rng>
	I0404 22:33:46.806232   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     
	I0404 22:33:46.806248   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)     
	I0404 22:33:46.806258   42937 main.go:141] libmachine: (kubernetes-upgrade-013199)   </devices>
	I0404 22:33:46.806265   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) </domain>
	I0404 22:33:46.806276   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) 
	I0404 22:33:46.811232   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:37:fb:66 in network default
	I0404 22:33:46.812010   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Ensuring networks are active...
	I0404 22:33:46.812035   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:46.812767   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Ensuring network default is active
	I0404 22:33:46.813173   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Ensuring network mk-kubernetes-upgrade-013199 is active
	I0404 22:33:46.814508   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Getting domain xml...
	I0404 22:33:46.815387   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Creating domain...
	I0404 22:33:48.194873   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Waiting to get IP...
	I0404 22:33:48.195999   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:48.196491   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:48.196541   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:48.196473   43015 retry.go:31] will retry after 301.571551ms: waiting for machine to come up
	I0404 22:33:48.500105   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:48.500564   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:48.500591   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:48.500512   43015 retry.go:31] will retry after 388.258156ms: waiting for machine to come up
	I0404 22:33:48.889918   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:48.890351   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:48.890378   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:48.890323   43015 retry.go:31] will retry after 414.269153ms: waiting for machine to come up
	I0404 22:33:49.306684   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:49.307140   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:49.307173   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:49.307075   43015 retry.go:31] will retry after 554.029841ms: waiting for machine to come up
	I0404 22:33:49.862467   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:49.862966   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:49.862987   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:49.862915   43015 retry.go:31] will retry after 648.392803ms: waiting for machine to come up
	I0404 22:33:50.512756   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:50.513153   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:50.513179   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:50.513109   43015 retry.go:31] will retry after 755.357826ms: waiting for machine to come up
	I0404 22:33:51.270383   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:51.270853   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:51.270893   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:51.270795   43015 retry.go:31] will retry after 999.585425ms: waiting for machine to come up
	I0404 22:33:52.272303   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:52.272775   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:52.272806   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:52.272702   43015 retry.go:31] will retry after 1.098602411s: waiting for machine to come up
	I0404 22:33:53.373156   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:53.373659   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:53.373690   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:53.373593   43015 retry.go:31] will retry after 1.222326633s: waiting for machine to come up
	I0404 22:33:54.597260   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:54.597630   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:54.597655   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:54.597591   43015 retry.go:31] will retry after 1.658330515s: waiting for machine to come up
	I0404 22:33:56.258131   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:56.258622   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:56.258664   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:56.258587   43015 retry.go:31] will retry after 2.49940639s: waiting for machine to come up
	I0404 22:33:58.761366   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:33:58.761787   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:33:58.761810   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:33:58.761739   43015 retry.go:31] will retry after 2.217252494s: waiting for machine to come up
	I0404 22:34:00.982042   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:00.982565   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:34:00.982587   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:34:00.982509   43015 retry.go:31] will retry after 3.519345626s: waiting for machine to come up
	I0404 22:34:04.505648   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:04.506032   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find current IP address of domain kubernetes-upgrade-013199 in network mk-kubernetes-upgrade-013199
	I0404 22:34:04.506061   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | I0404 22:34:04.505999   43015 retry.go:31] will retry after 3.873859447s: waiting for machine to come up
	I0404 22:34:08.382948   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.383412   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Found IP for machine: 192.168.39.229
	I0404 22:34:08.383432   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has current primary IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.383438   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Reserving static IP address...
	I0404 22:34:08.383841   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-013199", mac: "52:54:00:3b:00:1e", ip: "192.168.39.229"} in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.462389   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Getting to WaitForSSH function...
	I0404 22:34:08.462428   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Reserved static IP address: 192.168.39.229
	I0404 22:34:08.462442   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Waiting for SSH to be available...
	I0404 22:34:08.465221   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.465754   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:08.465786   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.465940   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Using SSH client type: external
	I0404 22:34:08.465971   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199/id_rsa (-rw-------)
	I0404 22:34:08.466016   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:34:08.466038   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | About to run SSH command:
	I0404 22:34:08.466054   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | exit 0
	I0404 22:34:08.596359   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | SSH cmd err, output: <nil>: 
	I0404 22:34:08.596659   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) KVM machine creation complete!
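Note: the repeated "will retry after ...: waiting for machine to come up" lines above come from a backoff-style retry loop that polls libvirt for the new domain's DHCP lease until an address appears. The following is a minimal, hypothetical Go sketch of that pattern only; it is not minikube's actual retry.go, and lookupIP plus the concrete delays are assumptions for illustration.

// Hypothetical sketch of the retry/backoff wait seen in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt/DHCP leases for the domain's address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease shows up on the 5th try
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.229", nil
}

// waitForIP retries with a growing, jittered delay until an IP is found or maxWait elapses.
func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := 300 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for IP: %w", err)
		}
		// Jittered, growing delay, similar in spirit to the 301ms...3.8s gaps in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	fmt.Println(ip, err)
}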
	I0404 22:34:08.597020   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetConfigRaw
	I0404 22:34:08.597533   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .DriverName
	I0404 22:34:08.597764   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .DriverName
	I0404 22:34:08.597934   42937 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0404 22:34:08.597945   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetState
	I0404 22:34:08.599273   42937 main.go:141] libmachine: Detecting operating system of created instance...
	I0404 22:34:08.599285   42937 main.go:141] libmachine: Waiting for SSH to be available...
	I0404 22:34:08.599300   42937 main.go:141] libmachine: Getting to WaitForSSH function...
	I0404 22:34:08.599307   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHHostname
	I0404 22:34:08.601632   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.602087   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:08.602124   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.602218   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHPort
	I0404 22:34:08.602426   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:08.602593   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:08.602776   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHUsername
	I0404 22:34:08.602972   42937 main.go:141] libmachine: Using SSH client type: native
	I0404 22:34:08.603180   42937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0404 22:34:08.603202   42937 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0404 22:34:08.715813   42937 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:34:08.715845   42937 main.go:141] libmachine: Detecting the provisioner...
	I0404 22:34:08.715853   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHHostname
	I0404 22:34:08.718787   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.719162   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:08.719201   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.719398   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHPort
	I0404 22:34:08.719604   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:08.719809   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:08.719960   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHUsername
	I0404 22:34:08.720138   42937 main.go:141] libmachine: Using SSH client type: native
	I0404 22:34:08.720355   42937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0404 22:34:08.720378   42937 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0404 22:34:08.833238   42937 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0404 22:34:08.833303   42937 main.go:141] libmachine: found compatible host: buildroot
	I0404 22:34:08.833309   42937 main.go:141] libmachine: Provisioning with buildroot...
	I0404 22:34:08.833329   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetMachineName
	I0404 22:34:08.833602   42937 buildroot.go:166] provisioning hostname "kubernetes-upgrade-013199"
	I0404 22:34:08.833645   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetMachineName
	I0404 22:34:08.833831   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHHostname
	I0404 22:34:08.836461   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.836956   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:08.836995   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.837132   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHPort
	I0404 22:34:08.837301   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:08.837535   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:08.837684   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHUsername
	I0404 22:34:08.837895   42937 main.go:141] libmachine: Using SSH client type: native
	I0404 22:34:08.838069   42937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0404 22:34:08.838082   42937 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-013199 && echo "kubernetes-upgrade-013199" | sudo tee /etc/hostname
	I0404 22:34:08.970912   42937 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-013199
	
	I0404 22:34:08.970934   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHHostname
	I0404 22:34:08.973843   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.974305   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:08.974328   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:08.974578   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHPort
	I0404 22:34:08.974795   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:08.974951   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:08.975086   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHUsername
	I0404 22:34:08.975196   42937 main.go:141] libmachine: Using SSH client type: native
	I0404 22:34:08.975366   42937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0404 22:34:08.975389   42937 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-013199' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-013199/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-013199' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:34:09.097917   42937 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:34:09.097955   42937 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:34:09.098008   42937 buildroot.go:174] setting up certificates
	I0404 22:34:09.098036   42937 provision.go:84] configureAuth start
	I0404 22:34:09.098054   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetMachineName
	I0404 22:34:09.098447   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetIP
	I0404 22:34:09.101538   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.101938   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:09.101989   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.102149   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHHostname
	I0404 22:34:09.104543   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.104938   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:09.104974   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.105153   42937 provision.go:143] copyHostCerts
	I0404 22:34:09.105200   42937 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:34:09.105210   42937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:34:09.105269   42937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:34:09.105362   42937 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:34:09.105370   42937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:34:09.105395   42937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:34:09.105475   42937 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:34:09.105482   42937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:34:09.105506   42937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:34:09.105566   42937 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-013199 san=[127.0.0.1 192.168.39.229 kubernetes-upgrade-013199 localhost minikube]
	I0404 22:34:09.293581   42937 provision.go:177] copyRemoteCerts
	I0404 22:34:09.293641   42937 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:34:09.293664   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHHostname
	I0404 22:34:09.296540   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.296899   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:09.296922   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.297153   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHPort
	I0404 22:34:09.297365   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:09.297538   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHUsername
	I0404 22:34:09.297705   42937 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199/id_rsa Username:docker}
	I0404 22:34:09.386756   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:34:09.412376   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0404 22:34:09.438644   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:34:09.464153   42937 provision.go:87] duration metric: took 366.102264ms to configureAuth
	I0404 22:34:09.464199   42937 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:34:09.464408   42937 config.go:182] Loaded profile config "kubernetes-upgrade-013199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:34:09.464492   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHHostname
	I0404 22:34:09.467266   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.467727   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:09.467759   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.467917   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHPort
	I0404 22:34:09.468140   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:09.468319   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:09.468468   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHUsername
	I0404 22:34:09.468612   42937 main.go:141] libmachine: Using SSH client type: native
	I0404 22:34:09.468759   42937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0404 22:34:09.468777   42937 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:34:09.745485   42937 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:34:09.745513   42937 main.go:141] libmachine: Checking connection to Docker...
	I0404 22:34:09.745523   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetURL
	I0404 22:34:09.746843   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | Using libvirt version 6000000
	I0404 22:34:09.749226   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.749725   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:09.749762   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.749902   42937 main.go:141] libmachine: Docker is up and running!
	I0404 22:34:09.749921   42937 main.go:141] libmachine: Reticulating splines...
	I0404 22:34:09.749928   42937 client.go:171] duration metric: took 23.491456631s to LocalClient.Create
	I0404 22:34:09.749951   42937 start.go:167] duration metric: took 23.491508906s to libmachine.API.Create "kubernetes-upgrade-013199"
	I0404 22:34:09.749965   42937 start.go:293] postStartSetup for "kubernetes-upgrade-013199" (driver="kvm2")
	I0404 22:34:09.749978   42937 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:34:09.749997   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .DriverName
	I0404 22:34:09.750273   42937 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:34:09.750302   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHHostname
	I0404 22:34:09.752677   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.753010   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:09.753042   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.753152   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHPort
	I0404 22:34:09.753336   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:09.753509   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHUsername
	I0404 22:34:09.753661   42937 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199/id_rsa Username:docker}
	I0404 22:34:09.843025   42937 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:34:09.848095   42937 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:34:09.848134   42937 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:34:09.848212   42937 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:34:09.848308   42937 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:34:09.848409   42937 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:34:09.858751   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:34:09.886200   42937 start.go:296] duration metric: took 136.222438ms for postStartSetup
	I0404 22:34:09.886249   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetConfigRaw
	I0404 22:34:09.886835   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetIP
	I0404 22:34:09.889662   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.890024   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:09.890097   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.890419   42937 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/config.json ...
	I0404 22:34:09.890642   42937 start.go:128] duration metric: took 23.652247241s to createHost
	I0404 22:34:09.890669   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHHostname
	I0404 22:34:09.893662   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.894037   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:09.894064   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:09.894207   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHPort
	I0404 22:34:09.894378   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:09.894523   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:09.894687   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHUsername
	I0404 22:34:09.894860   42937 main.go:141] libmachine: Using SSH client type: native
	I0404 22:34:09.895084   42937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0404 22:34:09.895100   42937 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0404 22:34:10.009177   42937 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712270049.985812851
	
	I0404 22:34:10.009207   42937 fix.go:216] guest clock: 1712270049.985812851
	I0404 22:34:10.009216   42937 fix.go:229] Guest: 2024-04-04 22:34:09.985812851 +0000 UTC Remote: 2024-04-04 22:34:09.890655965 +0000 UTC m=+23.799782649 (delta=95.156886ms)
	I0404 22:34:10.009240   42937 fix.go:200] guest clock delta is within tolerance: 95.156886ms
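Note: the "date +%s.%N" exchange above samples the guest clock so it can be compared against the host time, and the resulting 95.156886ms delta is checked against a skew tolerance. Below is a minimal sketch of that check, assuming a hypothetical parseGuestClock helper and an illustrative 2s threshold; the real tolerance logic lives in minikube's fix.go, not here. The timestamps are the ones printed in the log lines above.

// Hypothetical sketch of the guest-clock tolerance check.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate the fraction to 9 digits
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Values taken from the log lines above.
	guest, err := parseGuestClock("1712270049.985812851")
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 4, 4, 22, 34, 9, 890655965, time.UTC)
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// Illustrative 2s threshold, not minikube's actual tolerance.
	if delta <= 2*time.Second {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}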
	I0404 22:34:10.009247   42937 start.go:83] releasing machines lock for "kubernetes-upgrade-013199", held for 23.770946148s
	I0404 22:34:10.009294   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .DriverName
	I0404 22:34:10.009566   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetIP
	I0404 22:34:10.012530   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:10.012933   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:10.012969   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:10.013201   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .DriverName
	I0404 22:34:10.013785   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .DriverName
	I0404 22:34:10.013958   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .DriverName
	I0404 22:34:10.014034   42937 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:34:10.014075   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHHostname
	I0404 22:34:10.014142   42937 ssh_runner.go:195] Run: cat /version.json
	I0404 22:34:10.014169   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHHostname
	I0404 22:34:10.016834   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:10.016913   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:10.017182   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:10.017204   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:10.017242   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:10.017266   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:10.017350   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHPort
	I0404 22:34:10.017480   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHPort
	I0404 22:34:10.017561   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:10.017652   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHKeyPath
	I0404 22:34:10.017727   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHUsername
	I0404 22:34:10.017794   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetSSHUsername
	I0404 22:34:10.017847   42937 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199/id_rsa Username:docker}
	I0404 22:34:10.017935   42937 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/kubernetes-upgrade-013199/id_rsa Username:docker}
	I0404 22:34:10.105756   42937 ssh_runner.go:195] Run: systemctl --version
	I0404 22:34:10.142345   42937 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:34:10.306637   42937 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:34:10.313823   42937 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:34:10.313889   42937 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:34:10.334042   42937 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:34:10.334064   42937 start.go:494] detecting cgroup driver to use...
	I0404 22:34:10.334133   42937 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:34:10.351853   42937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:34:10.368396   42937 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:34:10.368456   42937 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:34:10.384380   42937 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:34:10.401079   42937 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:34:10.520853   42937 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:34:10.690175   42937 docker.go:233] disabling docker service ...
	I0404 22:34:10.690258   42937 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:34:10.706719   42937 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:34:10.721031   42937 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:34:10.856982   42937 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:34:11.002020   42937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:34:11.018335   42937 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:34:11.042402   42937 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0404 22:34:11.042473   42937 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:34:11.057371   42937 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:34:11.057435   42937 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:34:11.072361   42937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:34:11.084539   42937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:34:11.096161   42937 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:34:11.107902   42937 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:34:11.117964   42937 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:34:11.118014   42937 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:34:11.132093   42937 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:34:11.142512   42937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:34:11.285290   42937 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:34:11.454149   42937 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:34:11.454207   42937 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:34:11.459259   42937 start.go:562] Will wait 60s for crictl version
	I0404 22:34:11.459314   42937 ssh_runner.go:195] Run: which crictl
	I0404 22:34:11.465463   42937 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:34:11.507722   42937 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:34:11.507788   42937 ssh_runner.go:195] Run: crio --version
	I0404 22:34:11.543843   42937 ssh_runner.go:195] Run: crio --version
	I0404 22:34:11.574733   42937 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0404 22:34:11.576085   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) Calling .GetIP
	I0404 22:34:11.579032   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:11.579412   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:00:1e", ip: ""} in network mk-kubernetes-upgrade-013199: {Iface:virbr1 ExpiryTime:2024-04-04 23:34:02 +0000 UTC Type:0 Mac:52:54:00:3b:00:1e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:kubernetes-upgrade-013199 Clientid:01:52:54:00:3b:00:1e}
	I0404 22:34:11.579455   42937 main.go:141] libmachine: (kubernetes-upgrade-013199) DBG | domain kubernetes-upgrade-013199 has defined IP address 192.168.39.229 and MAC address 52:54:00:3b:00:1e in network mk-kubernetes-upgrade-013199
	I0404 22:34:11.579645   42937 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:34:11.583852   42937 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:34:11.599024   42937 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-013199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-013199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:34:11.599116   42937 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:34:11.599181   42937 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:34:11.635407   42937 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:34:11.635486   42937 ssh_runner.go:195] Run: which lz4
	I0404 22:34:11.639896   42937 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0404 22:34:11.644785   42937 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:34:11.644833   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0404 22:34:13.520800   42937 crio.go:462] duration metric: took 1.880927645s to copy over tarball
	I0404 22:34:13.520877   42937 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:34:16.133301   42937 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.612402582s)
	I0404 22:34:16.133333   42937 crio.go:469] duration metric: took 2.612501268s to extract the tarball
	I0404 22:34:16.133342   42937 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:34:16.199798   42937 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:34:16.251860   42937 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:34:16.251885   42937 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:34:16.251974   42937 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:34:16.252019   42937 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:34:16.251978   42937 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:34:16.252025   42937 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:34:16.252033   42937 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0404 22:34:16.251978   42937 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:34:16.252053   42937 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:34:16.252044   42937 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0404 22:34:16.253678   42937 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:34:16.254010   42937 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:34:16.254011   42937 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:34:16.254015   42937 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0404 22:34:16.254064   42937 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:34:16.254016   42937 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:34:16.254035   42937 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0404 22:34:16.254065   42937 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
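Note: the "daemon lookup ... No such image" warnings above are expected on this host. The v1.20.0 images are not present in a local Docker daemon, so minikube falls back to its on-disk cache, and the next lines probe the guest's runtime directly with "podman image inspect" before deciding which images need transfer. Below is a minimal sketch of that check-then-transfer decision; presentInGuest and the image list are hypothetical, and this is not minikube's cache_images.go.

// Hypothetical sketch of the image presence check and cache fallback.
package main

import (
	"fmt"
	"os/exec"
)

// presentInGuest reports whether an image already exists in the guest's
// container runtime; here it simply shells out locally for illustration.
func presentInGuest(image string) bool {
	cmd := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
	return cmd.Run() == nil
}

func main() {
	images := []string{
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/etcd:3.4.13-0",
	}
	for _, img := range images {
		if presentInGuest(img) {
			continue // already available in the runtime, nothing to do
		}
		// In minikube this would remove any stale tag and copy the cached
		// image tarball into the guest; here we only report the decision.
		fmt.Printf("%q needs transfer: loading from local image cache\n", img)
	}
}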
	I0404 22:34:16.428201   42937 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:34:16.455461   42937 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0404 22:34:16.491310   42937 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:34:16.493326   42937 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0404 22:34:16.493375   42937 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:34:16.493419   42937 ssh_runner.go:195] Run: which crictl
	I0404 22:34:16.496609   42937 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:34:16.514440   42937 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0404 22:34:16.514494   42937 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0404 22:34:16.514546   42937 ssh_runner.go:195] Run: which crictl
	I0404 22:34:16.522531   42937 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0404 22:34:16.551484   42937 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0404 22:34:16.576218   42937 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0404 22:34:16.576265   42937 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:34:16.576315   42937 ssh_runner.go:195] Run: which crictl
	I0404 22:34:16.576322   42937 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:34:16.576418   42937 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0404 22:34:16.576448   42937 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:34:16.576451   42937 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0404 22:34:16.576485   42937 ssh_runner.go:195] Run: which crictl
	I0404 22:34:16.630442   42937 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0404 22:34:16.630486   42937 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:34:16.630534   42937 ssh_runner.go:195] Run: which crictl
	I0404 22:34:16.633736   42937 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:34:16.692382   42937 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0404 22:34:16.692427   42937 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0404 22:34:16.692472   42937 ssh_runner.go:195] Run: which crictl
	I0404 22:34:16.692478   42937 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0404 22:34:16.692551   42937 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:34:16.692599   42937 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:34:16.692641   42937 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0404 22:34:16.692698   42937 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0404 22:34:16.748816   42937 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0404 22:34:16.748878   42937 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:34:16.748893   42937 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0404 22:34:16.748916   42937 ssh_runner.go:195] Run: which crictl
	I0404 22:34:16.785938   42937 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0404 22:34:16.786028   42937 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0404 22:34:16.786078   42937 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:34:16.786283   42937 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0404 22:34:16.832300   42937 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0404 22:34:16.832458   42937 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0404 22:34:17.135697   42937 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:34:17.278454   42937 cache_images.go:92] duration metric: took 1.026551719s to LoadCachedImages
	W0404 22:34:17.278574   42937 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0404 22:34:17.278595   42937 kubeadm.go:928] updating node { 192.168.39.229 8443 v1.20.0 crio true true} ...
	I0404 22:34:17.278734   42937 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-013199 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-013199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:34:17.278822   42937 ssh_runner.go:195] Run: crio config
	I0404 22:34:17.340072   42937 cni.go:84] Creating CNI manager for ""
	I0404 22:34:17.340099   42937 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:34:17.340116   42937 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:34:17.340152   42937 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-013199 NodeName:kubernetes-upgrade-013199 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0404 22:34:17.340330   42937 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-013199"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:34:17.340407   42937 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0404 22:34:17.352813   42937 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:34:17.352894   42937 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:34:17.363677   42937 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0404 22:34:17.385891   42937 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:34:17.404250   42937 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0404 22:34:17.425940   42937 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0404 22:34:17.430186   42937 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:34:17.445386   42937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:34:17.584614   42937 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:34:17.603659   42937 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199 for IP: 192.168.39.229
	I0404 22:34:17.603683   42937 certs.go:194] generating shared ca certs ...
	I0404 22:34:17.603703   42937 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:34:17.603863   42937 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:34:17.603920   42937 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:34:17.603932   42937 certs.go:256] generating profile certs ...
	I0404 22:34:17.604003   42937 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/client.key
	I0404 22:34:17.604021   42937 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/client.crt with IP's: []
	I0404 22:34:18.018349   42937 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/client.crt ...
	I0404 22:34:18.018385   42937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/client.crt: {Name:mk1d72abf1f39d493367f3e090cafdf5cfd58582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:34:18.018580   42937 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/client.key ...
	I0404 22:34:18.018602   42937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/client.key: {Name:mkeef7b51889d38feac0cc7819647c3791a7d357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:34:18.018711   42937 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.key.2cf119bf
	I0404 22:34:18.018731   42937 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.crt.2cf119bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.229]
	I0404 22:34:18.177862   42937 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.crt.2cf119bf ...
	I0404 22:34:18.177893   42937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.crt.2cf119bf: {Name:mk9b41320e6a360e359e713784c1d80da60044cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:34:18.178066   42937 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.key.2cf119bf ...
	I0404 22:34:18.178088   42937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.key.2cf119bf: {Name:mkde8fe3397fe47a05d5a9d7ebc162c8b5e1353c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:34:18.178182   42937 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.crt.2cf119bf -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.crt
	I0404 22:34:18.178278   42937 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.key.2cf119bf -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.key
	I0404 22:34:18.178366   42937 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/proxy-client.key
	I0404 22:34:18.178388   42937 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/proxy-client.crt with IP's: []
	I0404 22:34:18.463421   42937 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/proxy-client.crt ...
	I0404 22:34:18.463451   42937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/proxy-client.crt: {Name:mk95d89566960879dd4db50753138682c66a1c50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:34:18.463648   42937 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/proxy-client.key ...
	I0404 22:34:18.463665   42937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/proxy-client.key: {Name:mk9c93c0458706aba1869b3e0d37a5ba06b2a179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:34:18.463868   42937 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:34:18.463906   42937 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:34:18.463921   42937 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:34:18.463944   42937 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:34:18.463965   42937 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:34:18.463985   42937 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:34:18.464023   42937 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:34:18.464687   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:34:18.493828   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:34:18.519307   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:34:18.544851   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:34:18.569860   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0404 22:34:18.606944   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:34:18.635993   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:34:18.665770   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:34:18.692904   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:34:18.722509   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:34:18.751241   42937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:34:18.780714   42937 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:34:18.799610   42937 ssh_runner.go:195] Run: openssl version
	I0404 22:34:18.805996   42937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:34:18.817839   42937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:34:18.822666   42937 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:34:18.822728   42937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:34:18.828979   42937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:34:18.840105   42937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:34:18.851850   42937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:34:18.857086   42937 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:34:18.857156   42937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:34:18.863218   42937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:34:18.874606   42937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:34:18.885706   42937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:34:18.890254   42937 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:34:18.890320   42937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:34:18.896224   42937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:34:18.907589   42937 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:34:18.912300   42937 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0404 22:34:18.912378   42937 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-013199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-013199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:34:18.912450   42937 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:34:18.912506   42937 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:34:18.957602   42937 cri.go:89] found id: ""
	I0404 22:34:18.957684   42937 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0404 22:34:18.969916   42937 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:34:18.980622   42937 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:34:18.991499   42937 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:34:18.991520   42937 kubeadm.go:156] found existing configuration files:
	
	I0404 22:34:18.991566   42937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:34:19.004199   42937 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:34:19.004268   42937 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:34:19.017199   42937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:34:19.029203   42937 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:34:19.029279   42937 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:34:19.040799   42937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:34:19.050311   42937 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:34:19.050372   42937 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:34:19.060418   42937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:34:19.070268   42937 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:34:19.070332   42937 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:34:19.080250   42937 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 22:34:19.203945   42937 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 22:34:19.204012   42937 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 22:34:19.389580   42937 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 22:34:19.389750   42937 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 22:34:19.389904   42937 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 22:34:19.575107   42937 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 22:34:19.578312   42937 out.go:204]   - Generating certificates and keys ...
	I0404 22:34:19.578436   42937 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 22:34:19.578523   42937 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 22:34:19.677627   42937 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0404 22:34:19.771058   42937 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0404 22:34:19.994285   42937 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0404 22:34:20.160135   42937 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0404 22:34:20.514742   42937 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0404 22:34:20.515336   42937 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-013199 localhost] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0404 22:34:20.672341   42937 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0404 22:34:20.672583   42937 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-013199 localhost] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0404 22:34:21.127095   42937 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0404 22:34:21.274278   42937 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0404 22:34:21.334406   42937 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0404 22:34:21.334503   42937 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 22:34:21.410708   42937 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 22:34:21.464788   42937 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 22:34:21.638384   42937 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 22:34:21.805950   42937 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 22:34:21.825884   42937 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 22:34:21.827036   42937 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 22:34:21.827109   42937 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 22:34:21.982910   42937 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 22:34:21.985214   42937 out.go:204]   - Booting up control plane ...
	I0404 22:34:21.985387   42937 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 22:34:21.990084   42937 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 22:34:21.993743   42937 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 22:34:21.993882   42937 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 22:34:21.998964   42937 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 22:35:01.994856   42937 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 22:35:01.995497   42937 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:35:01.995691   42937 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:35:06.996420   42937 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:35:06.996699   42937 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:35:16.997155   42937 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:35:16.997493   42937 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:35:36.998859   42937 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:35:36.999143   42937 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:36:16.999072   42937 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:36:16.999337   42937 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:36:16.999349   42937 kubeadm.go:309] 
	I0404 22:36:16.999397   42937 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 22:36:16.999465   42937 kubeadm.go:309] 		timed out waiting for the condition
	I0404 22:36:16.999472   42937 kubeadm.go:309] 
	I0404 22:36:16.999499   42937 kubeadm.go:309] 	This error is likely caused by:
	I0404 22:36:16.999561   42937 kubeadm.go:309] 		- The kubelet is not running
	I0404 22:36:16.999722   42937 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 22:36:16.999734   42937 kubeadm.go:309] 
	I0404 22:36:16.999901   42937 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 22:36:16.999967   42937 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 22:36:17.000017   42937 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 22:36:17.000028   42937 kubeadm.go:309] 
	I0404 22:36:17.000182   42937 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 22:36:17.000302   42937 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 22:36:17.000314   42937 kubeadm.go:309] 
	I0404 22:36:17.000514   42937 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 22:36:17.000615   42937 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 22:36:17.000720   42937 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 22:36:17.000816   42937 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 22:36:17.000829   42937 kubeadm.go:309] 
	I0404 22:36:17.001379   42937 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 22:36:17.001501   42937 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 22:36:17.001589   42937 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0404 22:36:17.001747   42937 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-013199 localhost] and IPs [192.168.39.229 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-013199 localhost] and IPs [192.168.39.229 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-013199 localhost] and IPs [192.168.39.229 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-013199 localhost] and IPs [192.168.39.229 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0404 22:36:17.001808   42937 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 22:36:19.135375   42937 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.133532227s)
	I0404 22:36:19.135460   42937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:36:19.162515   42937 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:36:19.175021   42937 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:36:19.175048   42937 kubeadm.go:156] found existing configuration files:
	
	I0404 22:36:19.175108   42937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:36:19.188295   42937 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:36:19.188359   42937 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:36:19.203834   42937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:36:19.219023   42937 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:36:19.219088   42937 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:36:19.233449   42937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:36:19.248098   42937 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:36:19.248191   42937 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:36:19.263319   42937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:36:19.277822   42937 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:36:19.277891   42937 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:36:19.289243   42937 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 22:36:19.562211   42937 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 22:38:15.910194   42937 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 22:38:15.910331   42937 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0404 22:38:15.911714   42937 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 22:38:15.911777   42937 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 22:38:15.911872   42937 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 22:38:15.912061   42937 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 22:38:15.912233   42937 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 22:38:15.912325   42937 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 22:38:15.934279   42937 out.go:204]   - Generating certificates and keys ...
	I0404 22:38:15.934421   42937 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 22:38:15.934504   42937 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 22:38:15.934617   42937 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 22:38:15.934710   42937 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 22:38:15.934805   42937 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 22:38:15.934886   42937 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 22:38:15.934985   42937 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 22:38:15.935060   42937 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 22:38:15.935170   42937 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 22:38:15.935281   42937 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 22:38:15.935355   42937 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 22:38:15.935440   42937 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 22:38:15.935503   42937 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 22:38:15.935563   42937 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 22:38:15.935650   42937 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 22:38:15.935724   42937 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 22:38:15.935851   42937 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 22:38:15.935959   42937 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 22:38:15.936019   42937 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 22:38:15.936078   42937 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 22:38:15.938401   42937 out.go:204]   - Booting up control plane ...
	I0404 22:38:15.938509   42937 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 22:38:15.938603   42937 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 22:38:15.938682   42937 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 22:38:15.938799   42937 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 22:38:15.939004   42937 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 22:38:15.939074   42937 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 22:38:15.939177   42937 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:38:15.939417   42937 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:38:15.939538   42937 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:38:15.939752   42937 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:38:15.939850   42937 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:38:15.940108   42937 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:38:15.940223   42937 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:38:15.940444   42937 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:38:15.940524   42937 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:38:15.940740   42937 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:38:15.940757   42937 kubeadm.go:309] 
	I0404 22:38:15.940794   42937 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 22:38:15.940824   42937 kubeadm.go:309] 		timed out waiting for the condition
	I0404 22:38:15.940828   42937 kubeadm.go:309] 
	I0404 22:38:15.940859   42937 kubeadm.go:309] 	This error is likely caused by:
	I0404 22:38:15.940891   42937 kubeadm.go:309] 		- The kubelet is not running
	I0404 22:38:15.940995   42937 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 22:38:15.941008   42937 kubeadm.go:309] 
	I0404 22:38:15.941116   42937 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 22:38:15.941172   42937 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 22:38:15.941217   42937 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 22:38:15.941234   42937 kubeadm.go:309] 
	I0404 22:38:15.941360   42937 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 22:38:15.941470   42937 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 22:38:15.941478   42937 kubeadm.go:309] 
	I0404 22:38:15.941621   42937 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 22:38:15.941748   42937 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 22:38:15.941853   42937 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 22:38:15.941960   42937 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 22:38:15.942017   42937 kubeadm.go:309] 
	I0404 22:38:15.942099   42937 kubeadm.go:393] duration metric: took 3m57.029726051s to StartCluster
	I0404 22:38:15.942145   42937 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:38:15.942216   42937 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:38:16.006772   42937 cri.go:89] found id: ""
	I0404 22:38:16.006801   42937 logs.go:276] 0 containers: []
	W0404 22:38:16.006812   42937 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:38:16.006820   42937 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:38:16.006892   42937 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:38:16.056037   42937 cri.go:89] found id: ""
	I0404 22:38:16.056079   42937 logs.go:276] 0 containers: []
	W0404 22:38:16.056092   42937 logs.go:278] No container was found matching "etcd"
	I0404 22:38:16.056099   42937 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:38:16.056184   42937 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:38:16.099454   42937 cri.go:89] found id: ""
	I0404 22:38:16.099480   42937 logs.go:276] 0 containers: []
	W0404 22:38:16.099490   42937 logs.go:278] No container was found matching "coredns"
	I0404 22:38:16.099498   42937 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:38:16.099565   42937 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:38:16.147723   42937 cri.go:89] found id: ""
	I0404 22:38:16.147750   42937 logs.go:276] 0 containers: []
	W0404 22:38:16.147757   42937 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:38:16.147763   42937 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:38:16.147811   42937 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:38:16.194517   42937 cri.go:89] found id: ""
	I0404 22:38:16.194547   42937 logs.go:276] 0 containers: []
	W0404 22:38:16.194557   42937 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:38:16.194565   42937 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:38:16.194632   42937 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:38:16.248886   42937 cri.go:89] found id: ""
	I0404 22:38:16.248914   42937 logs.go:276] 0 containers: []
	W0404 22:38:16.248926   42937 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:38:16.248934   42937 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:38:16.248998   42937 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:38:16.306029   42937 cri.go:89] found id: ""
	I0404 22:38:16.306060   42937 logs.go:276] 0 containers: []
	W0404 22:38:16.306072   42937 logs.go:278] No container was found matching "kindnet"
	I0404 22:38:16.306083   42937 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:38:16.306097   42937 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:38:16.443479   42937 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:38:16.443505   42937 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:38:16.443520   42937 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:38:16.562000   42937 logs.go:123] Gathering logs for container status ...
	I0404 22:38:16.562046   42937 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:38:16.609288   42937 logs.go:123] Gathering logs for kubelet ...
	I0404 22:38:16.609321   42937 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:38:16.673930   42937 logs.go:123] Gathering logs for dmesg ...
	I0404 22:38:16.673973   42937 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0404 22:38:16.693899   42937 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0404 22:38:16.693940   42937 out.go:239] * 
	W0404 22:38:16.693999   42937 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 22:38:16.694039   42937 out.go:239] * 
	W0404 22:38:16.695097   42937 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 22:38:16.698712   42937 out.go:177] 
	W0404 22:38:16.700179   42937 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 22:38:16.700248   42937 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0404 22:38:16.700277   42937 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0404 22:38:16.701944   42937 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-013199 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-013199
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-013199: (2.568323033s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-013199 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-013199 status --format={{.Host}}: exit status 7 (88.169738ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-013199 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-013199 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.727256877s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-013199 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-013199 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-013199 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (110.655927ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-013199] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-013199
	    minikube start -p kubernetes-upgrade-013199 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0131992 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-013199 --kubernetes-version=v1.30.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-013199 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-013199 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.798193706s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-04 22:39:39.148013224 +0000 UTC m=+4235.010998934
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-013199 -n kubernetes-upgrade-013199
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-013199 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-013199 logs -n 25: (1.82156001s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p force-systemd-flag-048599          | force-systemd-flag-048599 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:34 UTC | 04 Apr 24 22:34 UTC |
	| start   | -p cert-options-754073                | cert-options-754073       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:34 UTC | 04 Apr 24 22:36 UTC |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p offline-crio-035370                | offline-crio-035370       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:35 UTC | 04 Apr 24 22:35 UTC |
	| start   | -p force-systemd-env-436667           | force-systemd-env-436667  | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:35 UTC | 04 Apr 24 22:36 UTC |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| ssh     | cert-options-754073 ssh               | cert-options-754073       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:36 UTC | 04 Apr 24 22:36 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-754073 -- sudo        | cert-options-754073       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:36 UTC | 04 Apr 24 22:36 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-754073                | cert-options-754073       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:36 UTC | 04 Apr 24 22:36 UTC |
	| start   | -p stopped-upgrade-654429             | minikube                  | jenkins | v1.26.0        | 04 Apr 24 22:36 UTC | 04 Apr 24 22:37 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |                |                     |                     |
	|         |  --container-runtime=crio             |                           |         |                |                     |                     |
	| delete  | -p force-systemd-env-436667           | force-systemd-env-436667  | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:36 UTC | 04 Apr 24 22:36 UTC |
	| start   | -p running-upgrade-590730             | minikube                  | jenkins | v1.26.0        | 04 Apr 24 22:36 UTC | 04 Apr 24 22:37 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |                |                     |                     |
	|         |  --container-runtime=crio             |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-654429 stop           | minikube                  | jenkins | v1.26.0        | 04 Apr 24 22:37 UTC | 04 Apr 24 22:37 UTC |
	| start   | -p stopped-upgrade-654429             | stopped-upgrade-654429    | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:37 UTC | 04 Apr 24 22:38 UTC |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p running-upgrade-590730             | running-upgrade-590730    | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:37 UTC | 04 Apr 24 22:39 UTC |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-013199          | kubernetes-upgrade-013199 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:38 UTC | 04 Apr 24 22:38 UTC |
	| start   | -p kubernetes-upgrade-013199          | kubernetes-upgrade-013199 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:38 UTC | 04 Apr 24 22:38 UTC |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0     |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p stopped-upgrade-654429             | stopped-upgrade-654429    | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:38 UTC | 04 Apr 24 22:38 UTC |
	| start   | -p pause-661005 --memory=2048         | pause-661005              | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:38 UTC |                     |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p cert-expiration-086102             | cert-expiration-086102    | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:38 UTC | 04 Apr 24 22:39 UTC |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-013199          | kubernetes-upgrade-013199 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:38 UTC |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-013199          | kubernetes-upgrade-013199 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:38 UTC | 04 Apr 24 22:39 UTC |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0     |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-590730             | running-upgrade-590730    | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:39 UTC | 04 Apr 24 22:39 UTC |
	| start   | -p NoKubernetes-450559                | NoKubernetes-450559       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:39 UTC |                     |
	|         | --no-kubernetes                       |                           |         |                |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p NoKubernetes-450559                | NoKubernetes-450559       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:39 UTC |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p cert-expiration-086102             | cert-expiration-086102    | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:39 UTC | 04 Apr 24 22:39 UTC |
	| start   | -p auto-063570 --memory=3072          | auto-063570               | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:39 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 22:39:28
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 22:39:28.122731   49220 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:39:28.124466   49220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:39:28.124487   49220 out.go:304] Setting ErrFile to fd 2...
	I0404 22:39:28.124504   49220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:39:28.125400   49220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:39:28.126272   49220 out.go:298] Setting JSON to false
	I0404 22:39:28.127495   49220 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4914,"bootTime":1712265455,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:39:28.127583   49220 start.go:139] virtualization: kvm guest
	I0404 22:39:28.129762   49220 out.go:177] * [auto-063570] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:39:28.131604   49220 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:39:28.131607   49220 notify.go:220] Checking for updates...
	I0404 22:39:28.133073   49220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:39:28.134560   49220 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:39:28.135884   49220 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:39:28.137402   49220 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:39:28.138827   49220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:39:28.140694   49220 config.go:182] Loaded profile config "NoKubernetes-450559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:39:28.140856   49220 config.go:182] Loaded profile config "kubernetes-upgrade-013199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:39:28.140947   49220 config.go:182] Loaded profile config "pause-661005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:39:28.141039   49220 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:39:28.185488   49220 out.go:177] * Using the kvm2 driver based on user configuration
	I0404 22:39:28.187201   49220 start.go:297] selected driver: kvm2
	I0404 22:39:28.187220   49220 start.go:901] validating driver "kvm2" against <nil>
	I0404 22:39:28.187235   49220 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:39:28.188399   49220 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:39:28.188505   49220 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:39:28.208500   49220 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:39:28.208560   49220 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0404 22:39:28.208838   49220 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:39:28.208906   49220 cni.go:84] Creating CNI manager for ""
	I0404 22:39:28.208929   49220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:39:28.208941   49220 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0404 22:39:28.209023   49220 start.go:340] cluster config:
	{Name:auto-063570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-063570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:39:28.209139   49220 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:39:28.211334   49220 out.go:177] * Starting "auto-063570" primary control-plane node in "auto-063570" cluster
	I0404 22:39:24.598341   48399 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:39:24.696131   48399 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:39:24.795989   48399 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:39:24.796145   48399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 22:39:24.796277   48399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-661005 minikube.k8s.io/updated_at=2024_04_04T22_39_24_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=pause-661005 minikube.k8s.io/primary=true
	I0404 22:39:25.285684   48399 ops.go:34] apiserver oom_adj: -16
	I0404 22:39:25.285815   48399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 22:39:25.786611   48399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 22:39:26.285866   48399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 22:39:26.785906   48399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 22:39:27.286831   48399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 22:39:27.786785   48399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 22:39:28.286440   48399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 22:39:28.786117   48399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 22:39:27.433032   48732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kubernetes-upgrade-013199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:39:27.588592   48732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:39:27.641238   48732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:39:27.682091   48732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:39:27.720596   48732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:39:27.745103   48732 ssh_runner.go:195] Run: openssl version
	I0404 22:39:27.752492   48732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:39:27.767474   48732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:39:27.773253   48732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:39:27.773331   48732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:39:27.781773   48732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:39:27.799190   48732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:39:27.816854   48732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:39:27.823482   48732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:39:27.823547   48732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:39:27.831349   48732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:39:27.846545   48732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:39:27.884073   48732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:39:27.907718   48732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:39:27.907782   48732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:39:27.918510   48732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:39:27.937386   48732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:39:27.947257   48732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:39:27.957027   48732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:39:27.969087   48732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:39:27.975512   48732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:39:27.985075   48732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:39:27.994220   48732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:39:28.000401   48732 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-013199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.0-rc.0 ClusterName:kubernetes-upgrade-013199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:39:28.000507   48732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:39:28.000564   48732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:39:28.068956   48732 cri.go:89] found id: "959e30cbb4e426f8fe14c61ebfa8098aa48e31aa95998020ac5bbb44eb267568"
	I0404 22:39:28.068979   48732 cri.go:89] found id: "b1c2bec8083ec3f6912fe61c50266437321f572ef5cd136ac8299906558b384b"
	I0404 22:39:28.068985   48732 cri.go:89] found id: "21d8efa13a17bb85e8f2a8648067123f15ddc08d54f929dbfe82a2953bc028c0"
	I0404 22:39:28.068989   48732 cri.go:89] found id: "de71bbab620071880b089ffd95f1aef64e2e6877dbe1d9d7870ec1caa13c5720"
	I0404 22:39:28.068993   48732 cri.go:89] found id: "9450e0795652ebe947b8d9b42cd98fc035f8f45cf8cdc3a910a7d5972f2e5a24"
	I0404 22:39:28.069004   48732 cri.go:89] found id: "a47fabda9bd173dd7e2ae9863a6c61a11c3a9ac3121a827874ecbf8bdb709d92"
	I0404 22:39:28.069008   48732 cri.go:89] found id: "087ca71dfc08a481653f8c4f7811c63068d7d15faf7b2a6de31cda361b8c06df"
	I0404 22:39:28.069012   48732 cri.go:89] found id: "187a645a407a898fc3a36083f8190366367ce022ce4ad8d237bf0155251c562b"
	I0404 22:39:28.069016   48732 cri.go:89] found id: "54f390310ab36a1e9838d2ba58b0903a5ef98229f664206688d9843afdb98874"
	I0404 22:39:28.069025   48732 cri.go:89] found id: "e21af8414277a1908af908657da2736014ef22476e28bd718053d9e461fe81c9"
	I0404 22:39:28.069030   48732 cri.go:89] found id: "c4864de2792171d5bbb2b38a578db7eb9ba04167f8bd8ac41717d7dabcc9f2bd"
	I0404 22:39:28.069034   48732 cri.go:89] found id: "1913ce494e786277aa5f7cc6c69f58515516594568667559c1cfcba029568d24"
	I0404 22:39:28.069042   48732 cri.go:89] found id: "d73b1a6b3861091d300313ef7a146266a81118caacf4a2f2587fd6cd7a1b86a4"
	I0404 22:39:28.069046   48732 cri.go:89] found id: ""
	I0404 22:39:28.069095   48732 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 04 22:39:39 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:39.952737834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15c1aed7-92b8-463c-93a6-47ac3da3488d name=/runtime.v1.RuntimeService/Version
	Apr 04 22:39:39 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:39.954379389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93d7762c-46e3-40a5-a2a9-6aed5509eee2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:39:39 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:39.955706460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712270379955670119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93d7762c-46e3-40a5-a2a9-6aed5509eee2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:39:39 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:39.957826535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8564fbc-ac34-4f39-b6c6-b262f3ae0303 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:39:39 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:39.957924688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8564fbc-ac34-4f39-b6c6-b262f3ae0303 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:39:39 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:39.962307747Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f1e2bbf02553038b00295f928240ddf2e2f2e4c07dc4925be583a005a2cefe0,PodSandboxId:92570e197c003600fa1bc31544aacb154b1ae01e5f2c371565364365291a08f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712270376831184889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzzdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,},Annotations:map[string]string{io.kubernetes.container.hash: c7dce45a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2819d765b753c01ce116e123bfdd620248f91a2c4aaabb41405bc9a1c5a6078,PodSandboxId:c7ab4737e84d77dcb21f95967f02e786183b37e2db659056a1e2fb6217040e62,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712270376703680311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzv45,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 47073805-9882-4115-83d8-47fcdc1e29c3,},Annotations:map[string]string{io.kubernetes.container.hash: ea92e5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feaab2f5d1febc554390adb07cd2e62f42be54c6521bc37a43cd519970e1e47d,PodSandboxId:f4ff5d95d4f4a353f40a8954cc8b6bead9d4e99ff56fc3c4d3099ae37b25894f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1712270376099638100,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6805b06-4a7d-44fa-9994-49cf26004ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 6df88a00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b759391ae25f60eef129382247150aaf8cf187bb83809a10afee85427f094ee,PodSandboxId:ffba6034c0fd6f4efd9dd2402c0cc2e2cdb40d8ec207f06f40433530e52c8628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,C
reatedAt:1712270376073925475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5zf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddbc535-dbe1-4406-b804-a8c891b090b8,},Annotations:map[string]string{io.kubernetes.container.hash: 421fd7f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de413e8131af2dd30d15c5239ad7cc48257dc6ab8a5ef687d1c822cb0bb4e69e,PodSandboxId:38941d3e0dd108b48fc20ce1e1ab1718c5717ac7db64509d3e4f17fc6005ac4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712270372263860937,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79078c32a4a207279be08fc1bbe02182,},Annotations:map[string]string{io.kubernetes.container.hash: bc994e01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04fc45a759ce4a55516e1dd7363d51465a2306e169054d9d6cab73fdde2a925c,PodSandboxId:916786a5cb71cd2a93714be32fd8c647aca155077a583e0934e322f7b321c77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712270372264835753,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e90353c28d30068294bd5baefd1c42b2,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e6cddec2abd5223a4609e3afa609d61dbba69532b0f7bf1a4384b7295ae8a77,PodSandboxId:c20ab05994f147de55539bbb8016be8ee6eaedaadd7f3247a0b68d0a3ceaa31b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712270372243374398,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181f5be8d7eeb28d9c7278faf19da0cf,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2963408f835d1fbb024d2fd433f2bc3c34157efc0596288ecccac2b1dcef87,PodSandboxId:dd3684054462ebe9ccc186663a218ebdc8c66959aa40325d745873d52067c511,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712270372248599076,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d3627160b388378096c061af0090a,},Annotations:map[string]string{io.kubernetes.container.hash: c108af44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c2bec8083ec3f6912fe61c50266437321f572ef5cd136ac8299906558b384b,PodSandboxId:ffba6034c0fd6f4efd9dd2402c0cc2e2cdb40d8ec207f06f40433530e52c8628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1712270367085917681,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5zf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddbc535-dbe1-4406-b804-a8c891b090b8,},Annotations:map[string]string{io.kubernetes.container.hash: 421fd7f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959e30cbb4e426f8fe14c61ebfa8098aa48e31aa95998020ac5bbb44eb267568,PodSandboxId:f4ff5d95d4f4a353f40a8954cc8b6bead9d4e99ff56fc3c4d3099ae37b25894f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712270367163541304,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6805b06-4a7d-44fa-9994-49cf26004ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 6df88a00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de71bbab620071880b089ffd95f1aef64e2e6877dbe1d9d7870ec1caa13c5720,PodSandboxId:916786a5cb71cd2a93714be32fd8c647aca155077a583e0934e322f7b321c77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1712270366928218159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,i
o.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e90353c28d30068294bd5baefd1c42b2,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21d8efa13a17bb85e8f2a8648067123f15ddc08d54f929dbfe82a2953bc028c0,PodSandboxId:38941d3e0dd108b48fc20ce1e1ab1718c5717ac7db64509d3e4f17fc6005ac4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712270366959746804,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79078c32a4a207279be08fc1bbe02182,},Annotations:map[string]string{io.kubernetes.container.hash: bc994e01,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9450e0795652ebe947b8d9b42cd98fc035f8f45cf8cdc3a910a7d5972f2e5a24,PodSandboxId:c20ab05994f147de55539bbb8016be8ee6eaedaadd7f3247a0b68d0a3ceaa31b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1712270366818901129,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181f5be8d7eeb28d9c7278faf19da0cf,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a47fabda9bd173dd7e2ae9863a6c61a11c3a9ac3121a827874ecbf8bdb709d92,PodSandboxId:42cbf9e24b8caccfb91d217a775b68f9ec9c1e03c72e84cc9efa80c641a9beb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1712270365152247952,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d3627160b388378096c061af0090a,},Annotations:map[string]string{io.kubernetes.container.hash: c108af44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087ca71dfc08a481653f8c4f7811c63068d7d15faf7b2a6de31cda361b8c06df,PodSandboxId:d2e6d3059a0051264d78ab975561c276c53cbc1447f48112c0225d594c95386e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712270351207657624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d
-fzv45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47073805-9882-4115-83d8-47fcdc1e29c3,},Annotations:map[string]string{io.kubernetes.container.hash: ea92e5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:187a645a407a898fc3a36083f8190366367ce022ce4ad8d237bf0155251c562b,PodSandboxId:9acc5439dd0c687f5d60f2d67fb9730aee4d3b6a4dcdfd2af1205be96a6fbde7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712270351165598602,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzzdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,},Annotations:map[string]string{io.kubernetes.container.hash: c7dce45a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8564fbc-ac34-4f39-b6c6-b262f3ae0303 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.017274161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ba023af-b41f-4acc-a7c2-5bc4b0792a9f name=/runtime.v1.RuntimeService/Version
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.017379858Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ba023af-b41f-4acc-a7c2-5bc4b0792a9f name=/runtime.v1.RuntimeService/Version
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.019553216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=848ad830-37ff-4f16-b9df-5a151c17829a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.020369447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712270380020324816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=848ad830-37ff-4f16-b9df-5a151c17829a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.021140774Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c84fd576-13fd-4929-b2b2-307248b797be name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.021215658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c84fd576-13fd-4929-b2b2-307248b797be name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.021611096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f1e2bbf02553038b00295f928240ddf2e2f2e4c07dc4925be583a005a2cefe0,PodSandboxId:92570e197c003600fa1bc31544aacb154b1ae01e5f2c371565364365291a08f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712270376831184889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzzdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,},Annotations:map[string]string{io.kubernetes.container.hash: c7dce45a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2819d765b753c01ce116e123bfdd620248f91a2c4aaabb41405bc9a1c5a6078,PodSandboxId:c7ab4737e84d77dcb21f95967f02e786183b37e2db659056a1e2fb6217040e62,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712270376703680311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzv45,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 47073805-9882-4115-83d8-47fcdc1e29c3,},Annotations:map[string]string{io.kubernetes.container.hash: ea92e5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feaab2f5d1febc554390adb07cd2e62f42be54c6521bc37a43cd519970e1e47d,PodSandboxId:f4ff5d95d4f4a353f40a8954cc8b6bead9d4e99ff56fc3c4d3099ae37b25894f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1712270376099638100,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6805b06-4a7d-44fa-9994-49cf26004ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 6df88a00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b759391ae25f60eef129382247150aaf8cf187bb83809a10afee85427f094ee,PodSandboxId:ffba6034c0fd6f4efd9dd2402c0cc2e2cdb40d8ec207f06f40433530e52c8628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,C
reatedAt:1712270376073925475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5zf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddbc535-dbe1-4406-b804-a8c891b090b8,},Annotations:map[string]string{io.kubernetes.container.hash: 421fd7f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de413e8131af2dd30d15c5239ad7cc48257dc6ab8a5ef687d1c822cb0bb4e69e,PodSandboxId:38941d3e0dd108b48fc20ce1e1ab1718c5717ac7db64509d3e4f17fc6005ac4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712270372263860937,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79078c32a4a207279be08fc1bbe02182,},Annotations:map[string]string{io.kubernetes.container.hash: bc994e01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04fc45a759ce4a55516e1dd7363d51465a2306e169054d9d6cab73fdde2a925c,PodSandboxId:916786a5cb71cd2a93714be32fd8c647aca155077a583e0934e322f7b321c77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712270372264835753,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e90353c28d30068294bd5baefd1c42b2,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e6cddec2abd5223a4609e3afa609d61dbba69532b0f7bf1a4384b7295ae8a77,PodSandboxId:c20ab05994f147de55539bbb8016be8ee6eaedaadd7f3247a0b68d0a3ceaa31b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712270372243374398,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181f5be8d7eeb28d9c7278faf19da0cf,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2963408f835d1fbb024d2fd433f2bc3c34157efc0596288ecccac2b1dcef87,PodSandboxId:dd3684054462ebe9ccc186663a218ebdc8c66959aa40325d745873d52067c511,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712270372248599076,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d3627160b388378096c061af0090a,},Annotations:map[string]string{io.kubernetes.container.hash: c108af44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c2bec8083ec3f6912fe61c50266437321f572ef5cd136ac8299906558b384b,PodSandboxId:ffba6034c0fd6f4efd9dd2402c0cc2e2cdb40d8ec207f06f40433530e52c8628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1712270367085917681,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5zf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddbc535-dbe1-4406-b804-a8c891b090b8,},Annotations:map[string]string{io.kubernetes.container.hash: 421fd7f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959e30cbb4e426f8fe14c61ebfa8098aa48e31aa95998020ac5bbb44eb267568,PodSandboxId:f4ff5d95d4f4a353f40a8954cc8b6bead9d4e99ff56fc3c4d3099ae37b25894f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712270367163541304,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6805b06-4a7d-44fa-9994-49cf26004ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 6df88a00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de71bbab620071880b089ffd95f1aef64e2e6877dbe1d9d7870ec1caa13c5720,PodSandboxId:916786a5cb71cd2a93714be32fd8c647aca155077a583e0934e322f7b321c77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1712270366928218159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,i
o.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e90353c28d30068294bd5baefd1c42b2,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21d8efa13a17bb85e8f2a8648067123f15ddc08d54f929dbfe82a2953bc028c0,PodSandboxId:38941d3e0dd108b48fc20ce1e1ab1718c5717ac7db64509d3e4f17fc6005ac4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712270366959746804,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79078c32a4a207279be08fc1bbe02182,},Annotations:map[string]string{io.kubernetes.container.hash: bc994e01,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9450e0795652ebe947b8d9b42cd98fc035f8f45cf8cdc3a910a7d5972f2e5a24,PodSandboxId:c20ab05994f147de55539bbb8016be8ee6eaedaadd7f3247a0b68d0a3ceaa31b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1712270366818901129,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181f5be8d7eeb28d9c7278faf19da0cf,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a47fabda9bd173dd7e2ae9863a6c61a11c3a9ac3121a827874ecbf8bdb709d92,PodSandboxId:42cbf9e24b8caccfb91d217a775b68f9ec9c1e03c72e84cc9efa80c641a9beb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1712270365152247952,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d3627160b388378096c061af0090a,},Annotations:map[string]string{io.kubernetes.container.hash: c108af44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087ca71dfc08a481653f8c4f7811c63068d7d15faf7b2a6de31cda361b8c06df,PodSandboxId:d2e6d3059a0051264d78ab975561c276c53cbc1447f48112c0225d594c95386e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712270351207657624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d
-fzv45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47073805-9882-4115-83d8-47fcdc1e29c3,},Annotations:map[string]string{io.kubernetes.container.hash: ea92e5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:187a645a407a898fc3a36083f8190366367ce022ce4ad8d237bf0155251c562b,PodSandboxId:9acc5439dd0c687f5d60f2d67fb9730aee4d3b6a4dcdfd2af1205be96a6fbde7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712270351165598602,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzzdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,},Annotations:map[string]string{io.kubernetes.container.hash: c7dce45a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c84fd576-13fd-4929-b2b2-307248b797be name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.044228440Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=d46df200-2bdd-4caa-a63b-197d6ab4c5a9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.044527388Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:92570e197c003600fa1bc31544aacb154b1ae01e5f2c371565364365291a08f6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzzdb,Uid:b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712270376058927860,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzzdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-04T22:39:35.730922083Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c7ab4737e84d77dcb21f95967f02e786183b37e2db659056a1e2fb6217040e62,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fzv45,Uid:47073805-9882-4115-83d8-47fcdc1e29c3,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712270376053468856,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzv45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47073805-9882-4115-83d8-47fcdc1e29c3,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-04T22:39:35.730920868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:916786a5cb71cd2a93714be32fd8c647aca155077a583e0934e322f7b321c77d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-013199,Uid:e90353c28d30068294bd5baefd1c42b2,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712270366494174514,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e90353c28d30068294bd5baefd1c42b2,tier: control-plane,},Ann
otations:map[string]string{kubernetes.io/config.hash: e90353c28d30068294bd5baefd1c42b2,kubernetes.io/config.seen: 2024-04-04T22:38:48.937934853Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd3684054462ebe9ccc186663a218ebdc8c66959aa40325d745873d52067c511,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-013199,Uid:cd6d3627160b388378096c061af0090a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712270366476653832,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d3627160b388378096c061af0090a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.229:8443,kubernetes.io/config.hash: cd6d3627160b388378096c061af0090a,kubernetes.io/config.seen: 2024-04-04T22:38:48.932453894Z,kubernetes.io/config.source: file,},RuntimeHand
ler:,},&PodSandbox{Id:38941d3e0dd108b48fc20ce1e1ab1718c5717ac7db64509d3e4f17fc6005ac4b,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-013199,Uid:79078c32a4a207279be08fc1bbe02182,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712270366458091948,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79078c32a4a207279be08fc1bbe02182,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.229:2379,kubernetes.io/config.hash: 79078c32a4a207279be08fc1bbe02182,kubernetes.io/config.seen: 2024-04-04T22:38:49.035866836Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f4ff5d95d4f4a353f40a8954cc8b6bead9d4e99ff56fc3c4d3099ae37b25894f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d6805b06-4a7d-44fa-9994-49cf26004ac3,Namespace:kube-system,Attempt:2,},State:SANDBOX
_READY,CreatedAt:1712270366455910575,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6805b06-4a7d-44fa-9994-49cf26004ac3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"typ
e\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-04T22:39:09.538718530Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ffba6034c0fd6f4efd9dd2402c0cc2e2cdb40d8ec207f06f40433530e52c8628,Metadata:&PodSandboxMetadata{Name:kube-proxy-5zf2s,Uid:cddbc535-dbe1-4406-b804-a8c891b090b8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712270366451971733,Labels:map[string]string{controller-revision-hash: 97c89d47,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5zf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddbc535-dbe1-4406-b804-a8c891b090b8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-04T22:39:10.195900342Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c20ab05994f147de55539bbb8016be8ee6eaedaadd7f3247a0b68d0a3ceaa31b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-013199,Uid:181f5be8d7eeb28d9c727
8faf19da0cf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712270366401085100,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181f5be8d7eeb28d9c7278faf19da0cf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 181f5be8d7eeb28d9c7278faf19da0cf,kubernetes.io/config.seen: 2024-04-04T22:38:48.936219732Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:42cbf9e24b8caccfb91d217a775b68f9ec9c1e03c72e84cc9efa80c641a9beb9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-013199,Uid:cd6d3627160b388378096c061af0090a,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1712270364323220839,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-013199,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d3627160b388378096c061af0090a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.229:8443,kubernetes.io/config.hash: cd6d3627160b388378096c061af0090a,kubernetes.io/config.seen: 2024-04-04T22:38:48.932453894Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9acc5439dd0c687f5d60f2d67fb9730aee4d3b6a4dcdfd2af1205be96a6fbde7,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzzdb,Uid:b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1712270350701739351,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzzdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-04T22:39:10.390663227Z,kubernetes.io/
config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d2e6d3059a0051264d78ab975561c276c53cbc1447f48112c0225d594c95386e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fzv45,Uid:47073805-9882-4115-83d8-47fcdc1e29c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1712270350651352991,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzv45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47073805-9882-4115-83d8-47fcdc1e29c3,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-04T22:39:10.340597273Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d46df200-2bdd-4caa-a63b-197d6ab4c5a9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.045898418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f49392f-4e0f-4266-9a13-873da8cd1fcd name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.045986807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f49392f-4e0f-4266-9a13-873da8cd1fcd name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.046375624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f1e2bbf02553038b00295f928240ddf2e2f2e4c07dc4925be583a005a2cefe0,PodSandboxId:92570e197c003600fa1bc31544aacb154b1ae01e5f2c371565364365291a08f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712270376831184889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzzdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,},Annotations:map[string]string{io.kubernetes.container.hash: c7dce45a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2819d765b753c01ce116e123bfdd620248f91a2c4aaabb41405bc9a1c5a6078,PodSandboxId:c7ab4737e84d77dcb21f95967f02e786183b37e2db659056a1e2fb6217040e62,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712270376703680311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzv45,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 47073805-9882-4115-83d8-47fcdc1e29c3,},Annotations:map[string]string{io.kubernetes.container.hash: ea92e5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feaab2f5d1febc554390adb07cd2e62f42be54c6521bc37a43cd519970e1e47d,PodSandboxId:f4ff5d95d4f4a353f40a8954cc8b6bead9d4e99ff56fc3c4d3099ae37b25894f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1712270376099638100,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6805b06-4a7d-44fa-9994-49cf26004ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 6df88a00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b759391ae25f60eef129382247150aaf8cf187bb83809a10afee85427f094ee,PodSandboxId:ffba6034c0fd6f4efd9dd2402c0cc2e2cdb40d8ec207f06f40433530e52c8628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,C
reatedAt:1712270376073925475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5zf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddbc535-dbe1-4406-b804-a8c891b090b8,},Annotations:map[string]string{io.kubernetes.container.hash: 421fd7f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de413e8131af2dd30d15c5239ad7cc48257dc6ab8a5ef687d1c822cb0bb4e69e,PodSandboxId:38941d3e0dd108b48fc20ce1e1ab1718c5717ac7db64509d3e4f17fc6005ac4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712270372263860937,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79078c32a4a207279be08fc1bbe02182,},Annotations:map[string]string{io.kubernetes.container.hash: bc994e01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04fc45a759ce4a55516e1dd7363d51465a2306e169054d9d6cab73fdde2a925c,PodSandboxId:916786a5cb71cd2a93714be32fd8c647aca155077a583e0934e322f7b321c77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712270372264835753,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e90353c28d30068294bd5baefd1c42b2,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e6cddec2abd5223a4609e3afa609d61dbba69532b0f7bf1a4384b7295ae8a77,PodSandboxId:c20ab05994f147de55539bbb8016be8ee6eaedaadd7f3247a0b68d0a3ceaa31b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712270372243374398,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181f5be8d7eeb28d9c7278faf19da0cf,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2963408f835d1fbb024d2fd433f2bc3c34157efc0596288ecccac2b1dcef87,PodSandboxId:dd3684054462ebe9ccc186663a218ebdc8c66959aa40325d745873d52067c511,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712270372248599076,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d3627160b388378096c061af0090a,},Annotations:map[string]string{io.kubernetes.container.hash: c108af44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c2bec8083ec3f6912fe61c50266437321f572ef5cd136ac8299906558b384b,PodSandboxId:ffba6034c0fd6f4efd9dd2402c0cc2e2cdb40d8ec207f06f40433530e52c8628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1712270367085917681,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5zf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddbc535-dbe1-4406-b804-a8c891b090b8,},Annotations:map[string]string{io.kubernetes.container.hash: 421fd7f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959e30cbb4e426f8fe14c61ebfa8098aa48e31aa95998020ac5bbb44eb267568,PodSandboxId:f4ff5d95d4f4a353f40a8954cc8b6bead9d4e99ff56fc3c4d3099ae37b25894f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712270367163541304,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6805b06-4a7d-44fa-9994-49cf26004ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 6df88a00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de71bbab620071880b089ffd95f1aef64e2e6877dbe1d9d7870ec1caa13c5720,PodSandboxId:916786a5cb71cd2a93714be32fd8c647aca155077a583e0934e322f7b321c77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1712270366928218159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,i
o.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e90353c28d30068294bd5baefd1c42b2,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21d8efa13a17bb85e8f2a8648067123f15ddc08d54f929dbfe82a2953bc028c0,PodSandboxId:38941d3e0dd108b48fc20ce1e1ab1718c5717ac7db64509d3e4f17fc6005ac4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712270366959746804,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79078c32a4a207279be08fc1bbe02182,},Annotations:map[string]string{io.kubernetes.container.hash: bc994e01,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9450e0795652ebe947b8d9b42cd98fc035f8f45cf8cdc3a910a7d5972f2e5a24,PodSandboxId:c20ab05994f147de55539bbb8016be8ee6eaedaadd7f3247a0b68d0a3ceaa31b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1712270366818901129,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181f5be8d7eeb28d9c7278faf19da0cf,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a47fabda9bd173dd7e2ae9863a6c61a11c3a9ac3121a827874ecbf8bdb709d92,PodSandboxId:42cbf9e24b8caccfb91d217a775b68f9ec9c1e03c72e84cc9efa80c641a9beb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1712270365152247952,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d3627160b388378096c061af0090a,},Annotations:map[string]string{io.kubernetes.container.hash: c108af44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087ca71dfc08a481653f8c4f7811c63068d7d15faf7b2a6de31cda361b8c06df,PodSandboxId:d2e6d3059a0051264d78ab975561c276c53cbc1447f48112c0225d594c95386e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712270351207657624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d
-fzv45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47073805-9882-4115-83d8-47fcdc1e29c3,},Annotations:map[string]string{io.kubernetes.container.hash: ea92e5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:187a645a407a898fc3a36083f8190366367ce022ce4ad8d237bf0155251c562b,PodSandboxId:9acc5439dd0c687f5d60f2d67fb9730aee4d3b6a4dcdfd2af1205be96a6fbde7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712270351165598602,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzzdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,},Annotations:map[string]string{io.kubernetes.container.hash: c7dce45a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f49392f-4e0f-4266-9a13-873da8cd1fcd name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.067864039Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd24c81c-512b-45c5-9257-ae9dc762f2db name=/runtime.v1.RuntimeService/Version
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.067950691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd24c81c-512b-45c5-9257-ae9dc762f2db name=/runtime.v1.RuntimeService/Version
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.069613450Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=729aaa58-967b-4567-b981-a26fd13313cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.070188897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712270380070156296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=729aaa58-967b-4567-b981-a26fd13313cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.070838734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d18e266-06e0-4212-8c49-f0f1e1b81143 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.070890649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d18e266-06e0-4212-8c49-f0f1e1b81143 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 22:39:40 kubernetes-upgrade-013199 crio[2641]: time="2024-04-04 22:39:40.071421771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f1e2bbf02553038b00295f928240ddf2e2f2e4c07dc4925be583a005a2cefe0,PodSandboxId:92570e197c003600fa1bc31544aacb154b1ae01e5f2c371565364365291a08f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712270376831184889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzzdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,},Annotations:map[string]string{io.kubernetes.container.hash: c7dce45a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2819d765b753c01ce116e123bfdd620248f91a2c4aaabb41405bc9a1c5a6078,PodSandboxId:c7ab4737e84d77dcb21f95967f02e786183b37e2db659056a1e2fb6217040e62,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712270376703680311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzv45,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 47073805-9882-4115-83d8-47fcdc1e29c3,},Annotations:map[string]string{io.kubernetes.container.hash: ea92e5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feaab2f5d1febc554390adb07cd2e62f42be54c6521bc37a43cd519970e1e47d,PodSandboxId:f4ff5d95d4f4a353f40a8954cc8b6bead9d4e99ff56fc3c4d3099ae37b25894f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1712270376099638100,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6805b06-4a7d-44fa-9994-49cf26004ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 6df88a00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b759391ae25f60eef129382247150aaf8cf187bb83809a10afee85427f094ee,PodSandboxId:ffba6034c0fd6f4efd9dd2402c0cc2e2cdb40d8ec207f06f40433530e52c8628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,C
reatedAt:1712270376073925475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5zf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddbc535-dbe1-4406-b804-a8c891b090b8,},Annotations:map[string]string{io.kubernetes.container.hash: 421fd7f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de413e8131af2dd30d15c5239ad7cc48257dc6ab8a5ef687d1c822cb0bb4e69e,PodSandboxId:38941d3e0dd108b48fc20ce1e1ab1718c5717ac7db64509d3e4f17fc6005ac4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712270372263860937,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79078c32a4a207279be08fc1bbe02182,},Annotations:map[string]string{io.kubernetes.container.hash: bc994e01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04fc45a759ce4a55516e1dd7363d51465a2306e169054d9d6cab73fdde2a925c,PodSandboxId:916786a5cb71cd2a93714be32fd8c647aca155077a583e0934e322f7b321c77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712270372264835753,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e90353c28d30068294bd5baefd1c42b2,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e6cddec2abd5223a4609e3afa609d61dbba69532b0f7bf1a4384b7295ae8a77,PodSandboxId:c20ab05994f147de55539bbb8016be8ee6eaedaadd7f3247a0b68d0a3ceaa31b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712270372243374398,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181f5be8d7eeb28d9c7278faf19da0cf,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2963408f835d1fbb024d2fd433f2bc3c34157efc0596288ecccac2b1dcef87,PodSandboxId:dd3684054462ebe9ccc186663a218ebdc8c66959aa40325d745873d52067c511,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712270372248599076,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d3627160b388378096c061af0090a,},Annotations:map[string]string{io.kubernetes.container.hash: c108af44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c2bec8083ec3f6912fe61c50266437321f572ef5cd136ac8299906558b384b,PodSandboxId:ffba6034c0fd6f4efd9dd2402c0cc2e2cdb40d8ec207f06f40433530e52c8628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1712270367085917681,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5zf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddbc535-dbe1-4406-b804-a8c891b090b8,},Annotations:map[string]string{io.kubernetes.container.hash: 421fd7f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959e30cbb4e426f8fe14c61ebfa8098aa48e31aa95998020ac5bbb44eb267568,PodSandboxId:f4ff5d95d4f4a353f40a8954cc8b6bead9d4e99ff56fc3c4d3099ae37b25894f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712270367163541304,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6805b06-4a7d-44fa-9994-49cf26004ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 6df88a00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de71bbab620071880b089ffd95f1aef64e2e6877dbe1d9d7870ec1caa13c5720,PodSandboxId:916786a5cb71cd2a93714be32fd8c647aca155077a583e0934e322f7b321c77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1712270366928218159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,i
o.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e90353c28d30068294bd5baefd1c42b2,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21d8efa13a17bb85e8f2a8648067123f15ddc08d54f929dbfe82a2953bc028c0,PodSandboxId:38941d3e0dd108b48fc20ce1e1ab1718c5717ac7db64509d3e4f17fc6005ac4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712270366959746804,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79078c32a4a207279be08fc1bbe02182,},Annotations:map[string]string{io.kubernetes.container.hash: bc994e01,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9450e0795652ebe947b8d9b42cd98fc035f8f45cf8cdc3a910a7d5972f2e5a24,PodSandboxId:c20ab05994f147de55539bbb8016be8ee6eaedaadd7f3247a0b68d0a3ceaa31b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1712270366818901129,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181f5be8d7eeb28d9c7278faf19da0cf,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a47fabda9bd173dd7e2ae9863a6c61a11c3a9ac3121a827874ecbf8bdb709d92,PodSandboxId:42cbf9e24b8caccfb91d217a775b68f9ec9c1e03c72e84cc9efa80c641a9beb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1712270365152247952,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-kubernetes-upgrade-013199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d3627160b388378096c061af0090a,},Annotations:map[string]string{io.kubernetes.container.hash: c108af44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087ca71dfc08a481653f8c4f7811c63068d7d15faf7b2a6de31cda361b8c06df,PodSandboxId:d2e6d3059a0051264d78ab975561c276c53cbc1447f48112c0225d594c95386e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712270351207657624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d
-fzv45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47073805-9882-4115-83d8-47fcdc1e29c3,},Annotations:map[string]string{io.kubernetes.container.hash: ea92e5e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:187a645a407a898fc3a36083f8190366367ce022ce4ad8d237bf0155251c562b,PodSandboxId:9acc5439dd0c687f5d60f2d67fb9730aee4d3b6a4dcdfd2af1205be96a6fbde7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712270351165598602,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzzdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1cf7004-2245-4e88-8eb8-5d83f4f8eee7,},Annotations:map[string]string{io.kubernetes.container.hash: c7dce45a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d18e266-06e0-4212-8c49-f0f1e1b81143 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3f1e2bbf02553       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   1                   92570e197c003       coredns-7db6d8ff4d-hzzdb
	b2819d765b753       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   1                   c7ab4737e84d7       coredns-7db6d8ff4d-fzv45
	feaab2f5d1feb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       2                   f4ff5d95d4f4a       storage-provisioner
	0b759391ae25f       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652   4 seconds ago       Running             kube-proxy                2                   ffba6034c0fd6       kube-proxy-5zf2s
	04fc45a759ce4       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5   7 seconds ago       Running             kube-scheduler            2                   916786a5cb71c       kube-scheduler-kubernetes-upgrade-013199
	de413e8131af2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 seconds ago       Running             etcd                      2                   38941d3e0dd10       etcd-kubernetes-upgrade-013199
	0b2963408f835       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3   7 seconds ago       Running             kube-apiserver            2                   dd3684054462e       kube-apiserver-kubernetes-upgrade-013199
	2e6cddec2abd5       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a   7 seconds ago       Running             kube-controller-manager   2                   c20ab05994f14       kube-controller-manager-kubernetes-upgrade-013199
	959e30cbb4e42       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Exited              storage-provisioner       1                   f4ff5d95d4f4a       storage-provisioner
	b1c2bec8083ec       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652   13 seconds ago      Exited              kube-proxy                1                   ffba6034c0fd6       kube-proxy-5zf2s
	21d8efa13a17b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   13 seconds ago      Exited              etcd                      1                   38941d3e0dd10       etcd-kubernetes-upgrade-013199
	de71bbab62007       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5   13 seconds ago      Exited              kube-scheduler            1                   916786a5cb71c       kube-scheduler-kubernetes-upgrade-013199
	9450e0795652e       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a   13 seconds ago      Exited              kube-controller-manager   1                   c20ab05994f14       kube-controller-manager-kubernetes-upgrade-013199
	a47fabda9bd17       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3   15 seconds ago      Exited              kube-apiserver            1                   42cbf9e24b8ca       kube-apiserver-kubernetes-upgrade-013199
	087ca71dfc08a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   28 seconds ago      Exited              coredns                   0                   d2e6d3059a005       coredns-7db6d8ff4d-fzv45
	187a645a407a8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   28 seconds ago      Exited              coredns                   0                   9acc5439dd0c6       coredns-7db6d8ff4d-hzzdb
	
	
	==> coredns [087ca71dfc08a481653f8c4f7811c63068d7d15faf7b2a6de31cda361b8c06df] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [187a645a407a898fc3a36083f8190366367ce022ce4ad8d237bf0155251c562b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3f1e2bbf02553038b00295f928240ddf2e2f2e4c07dc4925be583a005a2cefe0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b2819d765b753c01ce116e123bfdd620248f91a2c4aaabb41405bc9a1c5a6078] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-013199
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-013199
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 22:38:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-013199
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 22:39:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 22:39:35 +0000   Thu, 04 Apr 2024 22:38:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 22:39:35 +0000   Thu, 04 Apr 2024 22:38:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 22:39:35 +0000   Thu, 04 Apr 2024 22:38:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 22:39:35 +0000   Thu, 04 Apr 2024 22:38:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    kubernetes-upgrade-013199
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9794232f96ef41ad802675b7bfa2cf99
	  System UUID:                9794232f-96ef-41ad-8026-75b7bfa2cf99
	  Boot ID:                    1b7e30da-f504-43fd-9fb9-dfb1e73ceca0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.0
	  Kube-Proxy Version:         v1.30.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fzv45                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 coredns-7db6d8ff4d-hzzdb                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 etcd-kubernetes-upgrade-013199                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         38s
	  kube-system                 kube-apiserver-kubernetes-upgrade-013199             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-013199    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-5zf2s                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-kubernetes-upgrade-013199             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 29s                kube-proxy       
	  Normal  NodeAllocatableEnforced  52s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    51s (x8 over 52s)  kubelet          Node kubernetes-upgrade-013199 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x7 over 52s)  kubelet          Node kubernetes-upgrade-013199 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  51s (x8 over 52s)  kubelet          Node kubernetes-upgrade-013199 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           31s                node-controller  Node kubernetes-upgrade-013199 event: Registered Node kubernetes-upgrade-013199 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-013199 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-013199 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-013199 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.733760] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.070431] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.083516] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.204470] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.136336] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.314175] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +4.902226] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +0.073190] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.159543] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +7.666225] systemd-fstab-generator[1264]: Ignoring "noauto" option for root device
	[  +0.103043] kauditd_printk_skb: 97 callbacks suppressed
	[Apr 4 22:39] kauditd_printk_skb: 18 callbacks suppressed
	[ +20.966229] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.088437] kauditd_printk_skb: 77 callbacks suppressed
	[  +0.061616] systemd-fstab-generator[2217]: Ignoring "noauto" option for root device
	[  +0.197977] systemd-fstab-generator[2231]: Ignoring "noauto" option for root device
	[  +0.164295] systemd-fstab-generator[2243]: Ignoring "noauto" option for root device
	[  +0.775128] systemd-fstab-generator[2392]: Ignoring "noauto" option for root device
	[  +1.693930] systemd-fstab-generator[2761]: Ignoring "noauto" option for root device
	[  +2.182014] kauditd_printk_skb: 229 callbacks suppressed
	[  +3.017621] systemd-fstab-generator[3357]: Ignoring "noauto" option for root device
	[  +4.625051] kauditd_printk_skb: 47 callbacks suppressed
	[  +1.744854] systemd-fstab-generator[4079]: Ignoring "noauto" option for root device
	
	
	==> etcd [21d8efa13a17bb85e8f2a8648067123f15ddc08d54f929dbfe82a2953bc028c0] <==
	{"level":"info","ts":"2024-04-04T22:39:27.688643Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"52.019092ms"}
	{"level":"info","ts":"2024-04-04T22:39:27.715825Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-04T22:39:27.749185Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","commit-index":396}
	{"level":"info","ts":"2024-04-04T22:39:27.756455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-04T22:39:27.7568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became follower at term 2"}
	{"level":"info","ts":"2024-04-04T22:39:27.756874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b8647f2870156d71 [peers: [], term: 2, commit: 396, applied: 0, lastindex: 396, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-04T22:39:27.77615Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-04T22:39:27.797404Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":386}
	{"level":"info","ts":"2024-04-04T22:39:27.802222Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-04T22:39:27.806194Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b8647f2870156d71","timeout":"7s"}
	{"level":"info","ts":"2024-04-04T22:39:27.806771Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b8647f2870156d71"}
	{"level":"info","ts":"2024-04-04T22:39:27.806855Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b8647f2870156d71","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-04T22:39:27.807401Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-04T22:39:27.808156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 switched to configuration voters=(13286884612305677681)"}
	{"level":"info","ts":"2024-04-04T22:39:27.808264Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","added-peer-id":"b8647f2870156d71","added-peer-peer-urls":["https://192.168.39.229:2380"]}
	{"level":"info","ts":"2024-04-04T22:39:27.808384Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:39:27.808435Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:39:27.81045Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-04T22:39:27.81053Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-04T22:39:27.81056Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-04T22:39:27.818788Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-04T22:39:27.818856Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-04-04T22:39:27.81898Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-04-04T22:39:27.825205Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b8647f2870156d71","initial-advertise-peer-urls":["https://192.168.39.229:2380"],"listen-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.229:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-04T22:39:27.825265Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [de413e8131af2dd30d15c5239ad7cc48257dc6ab8a5ef687d1c822cb0bb4e69e] <==
	{"level":"info","ts":"2024-04-04T22:39:32.712957Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:39:32.713053Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T22:39:32.730177Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b8647f2870156d71","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-04-04T22:39:32.730381Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-04T22:39:32.730431Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-04T22:39:32.730443Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-04T22:39:32.739574Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-04T22:39:32.739797Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b8647f2870156d71","initial-advertise-peer-urls":["https://192.168.39.229:2380"],"listen-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.229:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-04T22:39:32.739844Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-04T22:39:32.739985Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-04-04T22:39:32.740085Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-04-04T22:39:33.575121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-04T22:39:33.575181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-04T22:39:33.575218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgPreVoteResp from b8647f2870156d71 at term 2"}
	{"level":"info","ts":"2024-04-04T22:39:33.575233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became candidate at term 3"}
	{"level":"info","ts":"2024-04-04T22:39:33.57524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2024-04-04T22:39:33.575269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became leader at term 3"}
	{"level":"info","ts":"2024-04-04T22:39:33.575307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2024-04-04T22:39:33.58851Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b8647f2870156d71","local-member-attributes":"{Name:kubernetes-upgrade-013199 ClientURLs:[https://192.168.39.229:2379]}","request-path":"/0/members/b8647f2870156d71/attributes","cluster-id":"2bfbf13ce68722b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-04T22:39:33.588701Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:39:33.588877Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:39:33.593104Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-04T22:39:33.593177Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-04T22:39:33.602556Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.229:2379"}
	{"level":"info","ts":"2024-04-04T22:39:33.609538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:39:40 up 1 min,  0 users,  load average: 2.41, 0.68, 0.24
	Linux kubernetes-upgrade-013199 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0b2963408f835d1fbb024d2fd433f2bc3c34157efc0596288ecccac2b1dcef87] <==
	I0404 22:39:35.355081       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0404 22:39:35.436875       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0404 22:39:35.438194       1 policy_source.go:224] refreshing policies
	I0404 22:39:35.448569       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0404 22:39:35.450708       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0404 22:39:35.454322       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0404 22:39:35.454360       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0404 22:39:35.454611       1 shared_informer.go:320] Caches are synced for configmaps
	I0404 22:39:35.457942       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0404 22:39:35.458062       1 aggregator.go:165] initial CRD sync complete...
	I0404 22:39:35.458079       1 autoregister_controller.go:141] Starting autoregister controller
	I0404 22:39:35.458085       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0404 22:39:35.458089       1 cache.go:39] Caches are synced for autoregister controller
	E0404 22:39:35.469141       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0404 22:39:35.525214       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0404 22:39:35.552687       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0404 22:39:35.552814       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0404 22:39:35.560665       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0404 22:39:36.359424       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0404 22:39:36.470957       1 controller.go:615] quota admission added evaluator for: endpoints
	I0404 22:39:37.551702       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0404 22:39:37.578263       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0404 22:39:37.630991       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0404 22:39:37.722481       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0404 22:39:37.731886       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [a47fabda9bd173dd7e2ae9863a6c61a11c3a9ac3121a827874ecbf8bdb709d92] <==
	
	
	==> kube-controller-manager [2e6cddec2abd5223a4609e3afa609d61dbba69532b0f7bf1a4384b7295ae8a77] <==
	I0404 22:39:38.058768       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0404 22:39:38.058779       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0404 22:39:38.106935       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0404 22:39:38.107445       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0404 22:39:38.107463       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0404 22:39:38.157506       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0404 22:39:38.157784       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0404 22:39:38.157799       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0404 22:39:38.206178       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0404 22:39:38.206397       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0404 22:39:38.206429       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0404 22:39:38.255354       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0404 22:39:38.255451       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0404 22:39:38.255464       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0404 22:39:38.310118       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0404 22:39:38.310198       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0404 22:39:38.310208       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0404 22:39:38.355896       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0404 22:39:38.355963       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0404 22:39:38.355985       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0404 22:39:38.356087       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0404 22:39:38.462046       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0404 22:39:38.462141       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0404 22:39:38.462203       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0404 22:39:38.462235       1 shared_informer.go:313] Waiting for caches to sync for disruption
	
	
	==> kube-controller-manager [9450e0795652ebe947b8d9b42cd98fc035f8f45cf8cdc3a910a7d5972f2e5a24] <==
	I0404 22:39:28.240505       1 serving.go:380] Generated self-signed cert in-memory
	I0404 22:39:28.814301       1 controllermanager.go:189] "Starting" version="v1.30.0-rc.0"
	I0404 22:39:28.814387       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:39:28.816252       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0404 22:39:28.816393       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0404 22:39:28.816882       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0404 22:39:28.816984       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [0b759391ae25f60eef129382247150aaf8cf187bb83809a10afee85427f094ee] <==
	I0404 22:39:36.419101       1 server_linux.go:69] "Using iptables proxy"
	I0404 22:39:36.498805       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.229"]
	I0404 22:39:36.616470       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0404 22:39:36.616554       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 22:39:36.616578       1 server_linux.go:165] "Using iptables Proxier"
	I0404 22:39:36.625287       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 22:39:36.625546       1 server.go:872] "Version info" version="v1.30.0-rc.0"
	I0404 22:39:36.625595       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:39:36.627546       1 config.go:192] "Starting service config controller"
	I0404 22:39:36.627591       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0404 22:39:36.627625       1 config.go:101] "Starting endpoint slice config controller"
	I0404 22:39:36.627631       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0404 22:39:36.628319       1 config.go:319] "Starting node config controller"
	I0404 22:39:36.628355       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0404 22:39:36.728722       1 shared_informer.go:320] Caches are synced for node config
	I0404 22:39:36.728885       1 shared_informer.go:320] Caches are synced for service config
	I0404 22:39:36.729212       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [b1c2bec8083ec3f6912fe61c50266437321f572ef5cd136ac8299906558b384b] <==
	
	
	==> kube-scheduler [04fc45a759ce4a55516e1dd7363d51465a2306e169054d9d6cab73fdde2a925c] <==
	I0404 22:39:33.250832       1 serving.go:380] Generated self-signed cert in-memory
	W0404 22:39:35.412145       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0404 22:39:35.412193       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 22:39:35.412204       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0404 22:39:35.412210       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0404 22:39:35.455540       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.0"
	I0404 22:39:35.455720       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:39:35.462305       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0404 22:39:35.463119       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:39:35.463767       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0404 22:39:35.463849       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0404 22:39:35.563425       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [de71bbab620071880b089ffd95f1aef64e2e6877dbe1d9d7870ec1caa13c5720] <==
	I0404 22:39:28.834163       1 serving.go:380] Generated self-signed cert in-memory
	W0404 22:39:29.298231       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.39.229:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.229:8443: connect: connection refused
	W0404 22:39:29.298344       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0404 22:39:29.298378       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0404 22:39:29.302432       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.0"
	I0404 22:39:29.302508       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:39:29.304411       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0404 22:39:29.304453       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0404 22:39:29.304472       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:39:29.304480       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0404 22:39:29.305079       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0404 22:39:29.305146       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0404 22:39:29.305178       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0404 22:39:29.305294       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0404 22:39:29.305478       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0404 22:39:29.305482       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 04 22:39:32 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:32.216906    3364 scope.go:117] "RemoveContainer" containerID="21d8efa13a17bb85e8f2a8648067123f15ddc08d54f929dbfe82a2953bc028c0"
	Apr 04 22:39:32 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:32.219700    3364 scope.go:117] "RemoveContainer" containerID="a47fabda9bd173dd7e2ae9863a6c61a11c3a9ac3121a827874ecbf8bdb709d92"
	Apr 04 22:39:32 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:32.220081    3364 scope.go:117] "RemoveContainer" containerID="9450e0795652ebe947b8d9b42cd98fc035f8f45cf8cdc3a910a7d5972f2e5a24"
	Apr 04 22:39:32 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:32.224130    3364 scope.go:117] "RemoveContainer" containerID="de71bbab620071880b089ffd95f1aef64e2e6877dbe1d9d7870ec1caa13c5720"
	Apr 04 22:39:32 kubernetes-upgrade-013199 kubelet[3364]: E0404 22:39:32.344335    3364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-013199?timeout=10s\": dial tcp 192.168.39.229:8443: connect: connection refused" interval="800ms"
	Apr 04 22:39:32 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:32.467443    3364 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-013199"
	Apr 04 22:39:32 kubernetes-upgrade-013199 kubelet[3364]: E0404 22:39:32.470634    3364 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.229:8443: connect: connection refused" node="kubernetes-upgrade-013199"
	Apr 04 22:39:33 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:33.273083    3364 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-013199"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.505846    3364 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-013199"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.506446    3364 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-013199"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.508070    3364 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.509490    3364 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.727339    3364 apiserver.go:52] "Watching apiserver"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.731183    3364 topology_manager.go:215] "Topology Admit Handler" podUID="d6805b06-4a7d-44fa-9994-49cf26004ac3" podNamespace="kube-system" podName="storage-provisioner"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.731332    3364 topology_manager.go:215] "Topology Admit Handler" podUID="47073805-9882-4115-83d8-47fcdc1e29c3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fzv45"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.731375    3364 topology_manager.go:215] "Topology Admit Handler" podUID="b1cf7004-2245-4e88-8eb8-5d83f4f8eee7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hzzdb"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.731428    3364 topology_manager.go:215] "Topology Admit Handler" podUID="cddbc535-dbe1-4406-b804-a8c891b090b8" podNamespace="kube-system" podName="kube-proxy-5zf2s"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.737414    3364 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.830554    3364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cddbc535-dbe1-4406-b804-a8c891b090b8-lib-modules\") pod \"kube-proxy-5zf2s\" (UID: \"cddbc535-dbe1-4406-b804-a8c891b090b8\") " pod="kube-system/kube-proxy-5zf2s"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.830990    3364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cddbc535-dbe1-4406-b804-a8c891b090b8-xtables-lock\") pod \"kube-proxy-5zf2s\" (UID: \"cddbc535-dbe1-4406-b804-a8c891b090b8\") " pod="kube-system/kube-proxy-5zf2s"
	Apr 04 22:39:35 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:35.831230    3364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d6805b06-4a7d-44fa-9994-49cf26004ac3-tmp\") pod \"storage-provisioner\" (UID: \"d6805b06-4a7d-44fa-9994-49cf26004ac3\") " pod="kube-system/storage-provisioner"
	Apr 04 22:39:36 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:36.032983    3364 scope.go:117] "RemoveContainer" containerID="b1c2bec8083ec3f6912fe61c50266437321f572ef5cd136ac8299906558b384b"
	Apr 04 22:39:36 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:36.037737    3364 scope.go:117] "RemoveContainer" containerID="959e30cbb4e426f8fe14c61ebfa8098aa48e31aa95998020ac5bbb44eb267568"
	Apr 04 22:39:39 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:39.042392    3364 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 04 22:39:40 kubernetes-upgrade-013199 kubelet[3364]: I0404 22:39:40.976795    3364 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [959e30cbb4e426f8fe14c61ebfa8098aa48e31aa95998020ac5bbb44eb267568] <==
	I0404 22:39:27.799240       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0404 22:39:27.810198       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [feaab2f5d1febc554390adb07cd2e62f42be54c6521bc37a43cd519970e1e47d] <==
	I0404 22:39:36.399303       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0404 22:39:36.437548       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0404 22:39:36.437625       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0404 22:39:36.519526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0404 22:39:36.520723       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-013199_3ffa5e87-69ed-4695-bf12-aa632ae43636!
	I0404 22:39:36.520407       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6ff0c9f-6ab4-4549-b9d8-d45f2f2eff4b", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-013199_3ffa5e87-69ed-4695-bf12-aa632ae43636 became leader
	I0404 22:39:36.621470       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-013199_3ffa5e87-69ed-4695-bf12-aa632ae43636!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:39:39.496921   49352 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/16143-5297/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-013199 -n kubernetes-upgrade-013199
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-013199 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-013199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-013199
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-013199: (1.139537131s)
--- FAIL: TestKubernetesUpgrade (356.67s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (274.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-343162 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-343162 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m34.367422529s)

                                                
                                                
-- stdout --
	* [old-k8s-version-343162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-343162" primary control-plane node in "old-k8s-version-343162" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 22:44:26.162434   58287 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:44:26.162604   58287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:44:26.162618   58287 out.go:304] Setting ErrFile to fd 2...
	I0404 22:44:26.162624   58287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:44:26.162932   58287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:44:26.163701   58287 out.go:298] Setting JSON to false
	I0404 22:44:26.165114   58287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5212,"bootTime":1712265455,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:44:26.165198   58287 start.go:139] virtualization: kvm guest
	I0404 22:44:26.168336   58287 out.go:177] * [old-k8s-version-343162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:44:26.170060   58287 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:44:26.170115   58287 notify.go:220] Checking for updates...
	I0404 22:44:26.173542   58287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:44:26.175049   58287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:44:26.176657   58287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:44:26.178244   58287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:44:26.179894   58287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:44:26.181808   58287 config.go:182] Loaded profile config "bridge-063570": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:44:26.181955   58287 config.go:182] Loaded profile config "custom-flannel-063570": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:44:26.182093   58287 config.go:182] Loaded profile config "flannel-063570": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:44:26.182218   58287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:44:26.224216   58287 out.go:177] * Using the kvm2 driver based on user configuration
	I0404 22:44:26.225780   58287 start.go:297] selected driver: kvm2
	I0404 22:44:26.225797   58287 start.go:901] validating driver "kvm2" against <nil>
	I0404 22:44:26.225812   58287 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:44:26.226611   58287 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:44:26.226692   58287 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:44:26.244576   58287 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:44:26.244643   58287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0404 22:44:26.244931   58287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:44:26.245024   58287 cni.go:84] Creating CNI manager for ""
	I0404 22:44:26.245043   58287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:44:26.245054   58287 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0404 22:44:26.245133   58287 start.go:340] cluster config:
	{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:44:26.245267   58287 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:44:26.247193   58287 out.go:177] * Starting "old-k8s-version-343162" primary control-plane node in "old-k8s-version-343162" cluster
	I0404 22:44:26.248596   58287 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:44:26.248642   58287 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0404 22:44:26.248666   58287 cache.go:56] Caching tarball of preloaded images
	I0404 22:44:26.248777   58287 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 22:44:26.248800   58287 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0404 22:44:26.248909   58287 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:44:26.248932   58287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json: {Name:mke1c953d3439c56bfe51c5322d98b54f0a71606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:44:26.249094   58287 start.go:360] acquireMachinesLock for old-k8s-version-343162: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:44:26.249135   58287 start.go:364] duration metric: took 20.098µs to acquireMachinesLock for "old-k8s-version-343162"
	I0404 22:44:26.249159   58287 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:44:26.249249   58287 start.go:125] createHost starting for "" (driver="kvm2")
	I0404 22:44:26.250993   58287 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0404 22:44:26.251194   58287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:44:26.251232   58287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:44:26.268995   58287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0404 22:44:26.269472   58287 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:44:26.270090   58287 main.go:141] libmachine: Using API Version  1
	I0404 22:44:26.270111   58287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:44:26.270495   58287 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:44:26.270698   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:44:26.270883   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:44:26.271055   58287 start.go:159] libmachine.API.Create for "old-k8s-version-343162" (driver="kvm2")
	I0404 22:44:26.271080   58287 client.go:168] LocalClient.Create starting
	I0404 22:44:26.271114   58287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem
	I0404 22:44:26.271149   58287 main.go:141] libmachine: Decoding PEM data...
	I0404 22:44:26.271166   58287 main.go:141] libmachine: Parsing certificate...
	I0404 22:44:26.271219   58287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem
	I0404 22:44:26.271236   58287 main.go:141] libmachine: Decoding PEM data...
	I0404 22:44:26.271251   58287 main.go:141] libmachine: Parsing certificate...
	I0404 22:44:26.271265   58287 main.go:141] libmachine: Running pre-create checks...
	I0404 22:44:26.271279   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .PreCreateCheck
	I0404 22:44:26.271672   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetConfigRaw
	I0404 22:44:26.272091   58287 main.go:141] libmachine: Creating machine...
	I0404 22:44:26.272103   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .Create
	I0404 22:44:26.272236   58287 main.go:141] libmachine: (old-k8s-version-343162) Creating KVM machine...
	I0404 22:44:26.273589   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found existing default KVM network
	I0404 22:44:26.275441   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:26.275302   58310 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0d0}
	I0404 22:44:26.275484   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | created network xml: 
	I0404 22:44:26.275503   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | <network>
	I0404 22:44:26.275531   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG |   <name>mk-old-k8s-version-343162</name>
	I0404 22:44:26.275554   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG |   <dns enable='no'/>
	I0404 22:44:26.275567   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG |   
	I0404 22:44:26.275576   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0404 22:44:26.275582   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG |     <dhcp>
	I0404 22:44:26.275599   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0404 22:44:26.275611   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG |     </dhcp>
	I0404 22:44:26.275649   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG |   </ip>
	I0404 22:44:26.275655   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG |   
	I0404 22:44:26.275660   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | </network>
	I0404 22:44:26.275673   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | 
	I0404 22:44:26.282375   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | trying to create private KVM network mk-old-k8s-version-343162 192.168.39.0/24...
	I0404 22:44:26.361415   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | private KVM network mk-old-k8s-version-343162 192.168.39.0/24 created
	I0404 22:44:26.361503   58287 main.go:141] libmachine: (old-k8s-version-343162) Setting up store path in /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162 ...
	I0404 22:44:26.361523   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:26.361375   58310 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:44:26.361578   58287 main.go:141] libmachine: (old-k8s-version-343162) Building disk image from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 22:44:26.361620   58287 main.go:141] libmachine: (old-k8s-version-343162) Downloading /home/jenkins/minikube-integration/16143-5297/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0404 22:44:26.612708   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:26.612568   58310 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa...
	I0404 22:44:26.955496   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:26.955340   58310 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/old-k8s-version-343162.rawdisk...
	I0404 22:44:26.955533   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Writing magic tar header
	I0404 22:44:26.955552   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Writing SSH key tar header
	I0404 22:44:26.955565   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:26.955506   58310 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162 ...
	I0404 22:44:26.955678   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162
	I0404 22:44:26.955712   58287 main.go:141] libmachine: (old-k8s-version-343162) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162 (perms=drwx------)
	I0404 22:44:26.955721   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines
	I0404 22:44:26.955741   58287 main.go:141] libmachine: (old-k8s-version-343162) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines (perms=drwxr-xr-x)
	I0404 22:44:26.955761   58287 main.go:141] libmachine: (old-k8s-version-343162) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube (perms=drwxr-xr-x)
	I0404 22:44:26.955785   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:44:26.955819   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297
	I0404 22:44:26.955835   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0404 22:44:26.955853   58287 main.go:141] libmachine: (old-k8s-version-343162) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297 (perms=drwxrwxr-x)
	I0404 22:44:26.955866   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Checking permissions on dir: /home/jenkins
	I0404 22:44:26.955891   58287 main.go:141] libmachine: (old-k8s-version-343162) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0404 22:44:26.955905   58287 main.go:141] libmachine: (old-k8s-version-343162) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0404 22:44:26.955921   58287 main.go:141] libmachine: (old-k8s-version-343162) Creating domain...
	I0404 22:44:26.955932   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Checking permissions on dir: /home
	I0404 22:44:26.955939   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Skipping /home - not owner
	I0404 22:44:26.957029   58287 main.go:141] libmachine: (old-k8s-version-343162) define libvirt domain using xml: 
	I0404 22:44:26.957047   58287 main.go:141] libmachine: (old-k8s-version-343162) <domain type='kvm'>
	I0404 22:44:26.957065   58287 main.go:141] libmachine: (old-k8s-version-343162)   <name>old-k8s-version-343162</name>
	I0404 22:44:26.957071   58287 main.go:141] libmachine: (old-k8s-version-343162)   <memory unit='MiB'>2200</memory>
	I0404 22:44:26.957077   58287 main.go:141] libmachine: (old-k8s-version-343162)   <vcpu>2</vcpu>
	I0404 22:44:26.957082   58287 main.go:141] libmachine: (old-k8s-version-343162)   <features>
	I0404 22:44:26.957087   58287 main.go:141] libmachine: (old-k8s-version-343162)     <acpi/>
	I0404 22:44:26.957092   58287 main.go:141] libmachine: (old-k8s-version-343162)     <apic/>
	I0404 22:44:26.957097   58287 main.go:141] libmachine: (old-k8s-version-343162)     <pae/>
	I0404 22:44:26.957104   58287 main.go:141] libmachine: (old-k8s-version-343162)     
	I0404 22:44:26.957112   58287 main.go:141] libmachine: (old-k8s-version-343162)   </features>
	I0404 22:44:26.957117   58287 main.go:141] libmachine: (old-k8s-version-343162)   <cpu mode='host-passthrough'>
	I0404 22:44:26.957125   58287 main.go:141] libmachine: (old-k8s-version-343162)   
	I0404 22:44:26.957129   58287 main.go:141] libmachine: (old-k8s-version-343162)   </cpu>
	I0404 22:44:26.957135   58287 main.go:141] libmachine: (old-k8s-version-343162)   <os>
	I0404 22:44:26.957143   58287 main.go:141] libmachine: (old-k8s-version-343162)     <type>hvm</type>
	I0404 22:44:26.957152   58287 main.go:141] libmachine: (old-k8s-version-343162)     <boot dev='cdrom'/>
	I0404 22:44:26.957159   58287 main.go:141] libmachine: (old-k8s-version-343162)     <boot dev='hd'/>
	I0404 22:44:26.957165   58287 main.go:141] libmachine: (old-k8s-version-343162)     <bootmenu enable='no'/>
	I0404 22:44:26.957171   58287 main.go:141] libmachine: (old-k8s-version-343162)   </os>
	I0404 22:44:26.957176   58287 main.go:141] libmachine: (old-k8s-version-343162)   <devices>
	I0404 22:44:26.957182   58287 main.go:141] libmachine: (old-k8s-version-343162)     <disk type='file' device='cdrom'>
	I0404 22:44:26.957191   58287 main.go:141] libmachine: (old-k8s-version-343162)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/boot2docker.iso'/>
	I0404 22:44:26.957199   58287 main.go:141] libmachine: (old-k8s-version-343162)       <target dev='hdc' bus='scsi'/>
	I0404 22:44:26.957204   58287 main.go:141] libmachine: (old-k8s-version-343162)       <readonly/>
	I0404 22:44:26.957208   58287 main.go:141] libmachine: (old-k8s-version-343162)     </disk>
	I0404 22:44:26.957214   58287 main.go:141] libmachine: (old-k8s-version-343162)     <disk type='file' device='disk'>
	I0404 22:44:26.957221   58287 main.go:141] libmachine: (old-k8s-version-343162)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0404 22:44:26.957240   58287 main.go:141] libmachine: (old-k8s-version-343162)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/old-k8s-version-343162.rawdisk'/>
	I0404 22:44:26.957248   58287 main.go:141] libmachine: (old-k8s-version-343162)       <target dev='hda' bus='virtio'/>
	I0404 22:44:26.957253   58287 main.go:141] libmachine: (old-k8s-version-343162)     </disk>
	I0404 22:44:26.957267   58287 main.go:141] libmachine: (old-k8s-version-343162)     <interface type='network'>
	I0404 22:44:26.957273   58287 main.go:141] libmachine: (old-k8s-version-343162)       <source network='mk-old-k8s-version-343162'/>
	I0404 22:44:26.957278   58287 main.go:141] libmachine: (old-k8s-version-343162)       <model type='virtio'/>
	I0404 22:44:26.957283   58287 main.go:141] libmachine: (old-k8s-version-343162)     </interface>
	I0404 22:44:26.957289   58287 main.go:141] libmachine: (old-k8s-version-343162)     <interface type='network'>
	I0404 22:44:26.957295   58287 main.go:141] libmachine: (old-k8s-version-343162)       <source network='default'/>
	I0404 22:44:26.957300   58287 main.go:141] libmachine: (old-k8s-version-343162)       <model type='virtio'/>
	I0404 22:44:26.957310   58287 main.go:141] libmachine: (old-k8s-version-343162)     </interface>
	I0404 22:44:26.957321   58287 main.go:141] libmachine: (old-k8s-version-343162)     <serial type='pty'>
	I0404 22:44:26.957330   58287 main.go:141] libmachine: (old-k8s-version-343162)       <target port='0'/>
	I0404 22:44:26.957336   58287 main.go:141] libmachine: (old-k8s-version-343162)     </serial>
	I0404 22:44:26.957345   58287 main.go:141] libmachine: (old-k8s-version-343162)     <console type='pty'>
	I0404 22:44:26.957352   58287 main.go:141] libmachine: (old-k8s-version-343162)       <target type='serial' port='0'/>
	I0404 22:44:26.957359   58287 main.go:141] libmachine: (old-k8s-version-343162)     </console>
	I0404 22:44:26.957368   58287 main.go:141] libmachine: (old-k8s-version-343162)     <rng model='virtio'>
	I0404 22:44:26.957387   58287 main.go:141] libmachine: (old-k8s-version-343162)       <backend model='random'>/dev/random</backend>
	I0404 22:44:26.957399   58287 main.go:141] libmachine: (old-k8s-version-343162)     </rng>
	I0404 22:44:26.957412   58287 main.go:141] libmachine: (old-k8s-version-343162)     
	I0404 22:44:26.957423   58287 main.go:141] libmachine: (old-k8s-version-343162)     
	I0404 22:44:26.957433   58287 main.go:141] libmachine: (old-k8s-version-343162)   </devices>
	I0404 22:44:26.957442   58287 main.go:141] libmachine: (old-k8s-version-343162) </domain>
	I0404 22:44:26.957453   58287 main.go:141] libmachine: (old-k8s-version-343162) 
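Editor's note: the block above is the libvirt domain XML the kvm2 driver defines before starting the guest (memory, vCPUs, boot ISO, raw disk, and the two virtio network interfaces). As a rough illustration only, such a definition can be rendered from a Go text/template before being handed to libvirt; the struct and template below are assumptions for this sketch, not the driver's actual code.

    package main

    import (
        "os"
        "text/template"
    )

    // DomainParams holds the handful of values the XML above varies on.
    // These names are illustrative, not the kvm2 driver's real types.
    type DomainParams struct {
        Name     string
        MemoryMB int
        VCPUs    int
        DiskPath string
        ISOPath  string
        Network  string
    }

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMB}}</memory>
      <vcpu>{{.VCPUs}}</vcpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='{{.ISOPath}}'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
        p := DomainParams{
            Name:     "old-k8s-version-343162",
            MemoryMB: 2200,
            VCPUs:    2,
            DiskPath: "/path/to/old-k8s-version-343162.rawdisk",
            ISOPath:  "/path/to/boot2docker.iso",
            Network:  "mk-old-k8s-version-343162",
        }
        // Render the XML to stdout; a real driver would pass it to libvirt's define-XML call.
        tmpl := template.Must(template.New("domain").Parse(domainTmpl))
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }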
	I0404 22:44:26.961868   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:f2:c1:d9 in network default
	I0404 22:44:26.962525   58287 main.go:141] libmachine: (old-k8s-version-343162) Ensuring networks are active...
	I0404 22:44:26.962549   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:26.963360   58287 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network default is active
	I0404 22:44:26.963793   58287 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network mk-old-k8s-version-343162 is active
	I0404 22:44:26.964367   58287 main.go:141] libmachine: (old-k8s-version-343162) Getting domain xml...
	I0404 22:44:26.965143   58287 main.go:141] libmachine: (old-k8s-version-343162) Creating domain...
	I0404 22:44:28.292324   58287 main.go:141] libmachine: (old-k8s-version-343162) Waiting to get IP...
	I0404 22:44:28.293222   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:28.293794   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:28.293822   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:28.293753   58310 retry.go:31] will retry after 197.288115ms: waiting for machine to come up
	I0404 22:44:28.492976   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:28.493723   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:28.493753   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:28.493651   58310 retry.go:31] will retry after 389.88842ms: waiting for machine to come up
	I0404 22:44:28.885533   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:28.886100   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:28.886132   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:28.886041   58310 retry.go:31] will retry after 389.072072ms: waiting for machine to come up
	I0404 22:44:29.276452   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:29.276938   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:29.276969   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:29.276893   58310 retry.go:31] will retry after 569.231142ms: waiting for machine to come up
	I0404 22:44:29.847899   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:29.848532   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:29.848564   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:29.848482   58310 retry.go:31] will retry after 762.209539ms: waiting for machine to come up
	I0404 22:44:30.612667   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:30.613343   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:30.613365   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:30.613307   58310 retry.go:31] will retry after 852.84329ms: waiting for machine to come up
	I0404 22:44:31.467371   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:31.467952   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:31.467991   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:31.467899   58310 retry.go:31] will retry after 987.358603ms: waiting for machine to come up
	I0404 22:44:32.456717   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:32.457333   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:32.457361   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:32.457286   58310 retry.go:31] will retry after 1.27245072s: waiting for machine to come up
	I0404 22:44:33.732038   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:33.732580   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:33.732608   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:33.732532   58310 retry.go:31] will retry after 1.465115367s: waiting for machine to come up
	I0404 22:44:35.199102   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:35.199719   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:35.199745   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:35.199668   58310 retry.go:31] will retry after 1.464632196s: waiting for machine to come up
	I0404 22:44:36.665693   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:36.666236   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:36.666265   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:36.666171   58310 retry.go:31] will retry after 1.882272946s: waiting for machine to come up
	I0404 22:44:38.550414   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:38.551119   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:38.551155   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:38.551070   58310 retry.go:31] will retry after 3.231003855s: waiting for machine to come up
	I0404 22:44:41.784505   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:41.785174   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:41.785202   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:41.785144   58310 retry.go:31] will retry after 4.286636086s: waiting for machine to come up
	I0404 22:44:46.076469   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:46.076904   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:44:46.076932   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:44:46.076863   58310 retry.go:31] will retry after 3.579438214s: waiting for machine to come up
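Editor's note: the "will retry after ..." lines above are a polling loop with roughly growing, jittered delays while the guest boots and picks up a DHCP lease. A minimal sketch of that wait pattern, assuming a hypothetical lookupIP helper that stands in for the driver's DHCP-lease lookup:

    package ipwait

    import (
        "fmt"
        "time"
    )

    // waitForIP polls until the machine reports an IP address, backing off
    // between attempts roughly the way the retry lines above do.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay = delay * 3 / 2 // grow the delay, capped at a few seconds
            }
        }
        return "", fmt.Errorf("timed out after %s waiting for machine to come up", timeout)
    }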
	I0404 22:44:50.193856   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.194507   58287 main.go:141] libmachine: (old-k8s-version-343162) Found IP for machine: 192.168.39.247
	I0404 22:44:50.194552   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has current primary IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.194568   58287 main.go:141] libmachine: (old-k8s-version-343162) Reserving static IP address...
	I0404 22:44:50.194960   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"} in network mk-old-k8s-version-343162
	I0404 22:44:50.276597   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Getting to WaitForSSH function...
	I0404 22:44:50.276622   58287 main.go:141] libmachine: (old-k8s-version-343162) Reserved static IP address: 192.168.39.247
	I0404 22:44:50.276635   58287 main.go:141] libmachine: (old-k8s-version-343162) Waiting for SSH to be available...
	I0404 22:44:50.279869   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.280416   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:50.280445   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.280607   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH client type: external
	I0404 22:44:50.280647   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa (-rw-------)
	I0404 22:44:50.280687   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:44:50.280710   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | About to run SSH command:
	I0404 22:44:50.280729   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | exit 0
	I0404 22:44:50.409383   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | SSH cmd err, output: <nil>: 
	I0404 22:44:50.409948   58287 main.go:141] libmachine: (old-k8s-version-343162) KVM machine creation complete!
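Editor's note: the "exit 0" exchange above is how libmachine confirms SSH is reachable: it repeatedly runs a no-op command over SSH until one succeeds. A self-contained sketch of the same probe using the system ssh client (host, user, and key path are placeholders, and the options are illustrative rather than libmachine's exact set):

    package sshprobe

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady returns nil once `ssh ... exit 0` succeeds, i.e. the guest's
    // sshd is up and the key is accepted.
    func sshReady(user, host, keyPath string, attempts int) error {
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                fmt.Sprintf("%s@%s", user, host),
                "exit 0")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("ssh to %s not available after %d attempts", host, attempts)
    }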
	I0404 22:44:50.410416   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetConfigRaw
	I0404 22:44:50.411089   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:44:50.411345   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:44:50.411520   58287 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0404 22:44:50.411543   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetState
	I0404 22:44:50.413044   58287 main.go:141] libmachine: Detecting operating system of created instance...
	I0404 22:44:50.413062   58287 main.go:141] libmachine: Waiting for SSH to be available...
	I0404 22:44:50.413071   58287 main.go:141] libmachine: Getting to WaitForSSH function...
	I0404 22:44:50.413081   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:44:50.415745   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.416163   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:50.416183   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.416433   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:44:50.416600   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:50.416778   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:50.416985   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:44:50.417176   58287 main.go:141] libmachine: Using SSH client type: native
	I0404 22:44:50.417383   58287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:44:50.417400   58287 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0404 22:44:50.531667   58287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:44:50.531694   58287 main.go:141] libmachine: Detecting the provisioner...
	I0404 22:44:50.531704   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:44:50.534817   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.535301   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:50.535332   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.535533   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:44:50.535745   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:50.535966   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:50.536136   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:44:50.536393   58287 main.go:141] libmachine: Using SSH client type: native
	I0404 22:44:50.536589   58287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:44:50.536605   58287 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0404 22:44:50.645571   58287 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0404 22:44:50.645647   58287 main.go:141] libmachine: found compatible host: buildroot
	I0404 22:44:50.645663   58287 main.go:141] libmachine: Provisioning with buildroot...
	I0404 22:44:50.645679   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:44:50.645906   58287 buildroot.go:166] provisioning hostname "old-k8s-version-343162"
	I0404 22:44:50.645927   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:44:50.646111   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:44:50.648920   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.649259   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:50.649286   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.649457   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:44:50.649654   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:50.649829   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:50.649994   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:44:50.650190   58287 main.go:141] libmachine: Using SSH client type: native
	I0404 22:44:50.650447   58287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:44:50.650465   58287 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-343162 && echo "old-k8s-version-343162" | sudo tee /etc/hostname
	I0404 22:44:50.772679   58287 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-343162
	
	I0404 22:44:50.772712   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:44:50.775821   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.776300   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:50.776330   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.776564   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:44:50.776778   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:50.776994   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:50.777180   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:44:50.777358   58287 main.go:141] libmachine: Using SSH client type: native
	I0404 22:44:50.777526   58287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:44:50.777542   58287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-343162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-343162/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-343162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:44:50.890164   58287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:44:50.890193   58287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:44:50.890211   58287 buildroot.go:174] setting up certificates
	I0404 22:44:50.890242   58287 provision.go:84] configureAuth start
	I0404 22:44:50.890250   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:44:50.890541   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:44:50.893634   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.894023   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:50.894043   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.894172   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:44:50.896832   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.897185   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:50.897213   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:50.897391   58287 provision.go:143] copyHostCerts
	I0404 22:44:50.897469   58287 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:44:50.897483   58287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:44:50.897569   58287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:44:50.897727   58287 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:44:50.897747   58287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:44:50.897795   58287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:44:50.897857   58287 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:44:50.897865   58287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:44:50.897888   58287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:44:50.897932   58287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-343162 san=[127.0.0.1 192.168.39.247 localhost minikube old-k8s-version-343162]
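Editor's note: the line above generates the machine's server certificate with subject alternative names covering 127.0.0.1, the guest IP, and the hostnames. As a compressed illustration of the SAN handling with Go's standard library (self-signed here for brevity, whereas minikube signs against its CA key as shown in the log):

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // serverCertPEM returns a self-signed server certificate and key (PEM) valid
    // for the given DNS names and IPs. This is a simplified sketch of the SAN
    // handling only, not minikube's provisioner.
    func serverCertPEM(org string, dnsNames []string, ips []net.IP) ([]byte, []byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // ~3 years, illustrative
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames,
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, nil, err
        }
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }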
	I0404 22:44:51.070090   58287 provision.go:177] copyRemoteCerts
	I0404 22:44:51.070144   58287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:44:51.070172   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:44:51.072821   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.073276   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:51.073304   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.073535   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:44:51.073709   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:51.073868   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:44:51.073991   58287 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:44:51.156211   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:44:51.185329   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0404 22:44:51.214358   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:44:51.242098   58287 provision.go:87] duration metric: took 351.844043ms to configureAuth
	I0404 22:44:51.242123   58287 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:44:51.242325   58287 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:44:51.242414   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:44:51.244937   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.245312   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:51.245345   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.245531   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:44:51.245712   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:51.245885   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:51.246024   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:44:51.246149   58287 main.go:141] libmachine: Using SSH client type: native
	I0404 22:44:51.246359   58287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:44:51.246381   58287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:44:51.538334   58287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:44:51.538381   58287 main.go:141] libmachine: Checking connection to Docker...
	I0404 22:44:51.538392   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetURL
	I0404 22:44:51.540039   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using libvirt version 6000000
	I0404 22:44:51.542780   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.543207   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:51.543240   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.543383   58287 main.go:141] libmachine: Docker is up and running!
	I0404 22:44:51.543401   58287 main.go:141] libmachine: Reticulating splines...
	I0404 22:44:51.543408   58287 client.go:171] duration metric: took 25.272319482s to LocalClient.Create
	I0404 22:44:51.543445   58287 start.go:167] duration metric: took 25.272378054s to libmachine.API.Create "old-k8s-version-343162"
	I0404 22:44:51.543454   58287 start.go:293] postStartSetup for "old-k8s-version-343162" (driver="kvm2")
	I0404 22:44:51.543463   58287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:44:51.543480   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:44:51.543707   58287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:44:51.543729   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:44:51.546369   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.546743   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:51.546774   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.546967   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:44:51.547141   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:51.547319   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:44:51.547489   58287 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:44:51.631645   58287 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:44:51.636550   58287 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:44:51.636574   58287 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:44:51.636650   58287 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:44:51.636740   58287 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:44:51.636850   58287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:44:51.647161   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:44:51.673222   58287 start.go:296] duration metric: took 129.754595ms for postStartSetup
	I0404 22:44:51.673297   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetConfigRaw
	I0404 22:44:51.673910   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:44:51.676751   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.677149   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:51.677177   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.677414   58287 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:44:51.677612   58287 start.go:128] duration metric: took 25.428353737s to createHost
	I0404 22:44:51.677636   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:44:51.680271   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.680687   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:51.680720   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.680843   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:44:51.681044   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:51.681241   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:51.681390   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:44:51.681561   58287 main.go:141] libmachine: Using SSH client type: native
	I0404 22:44:51.681776   58287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:44:51.681812   58287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0404 22:44:51.790847   58287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712270691.758618761
	
	I0404 22:44:51.790871   58287 fix.go:216] guest clock: 1712270691.758618761
	I0404 22:44:51.790880   58287 fix.go:229] Guest: 2024-04-04 22:44:51.758618761 +0000 UTC Remote: 2024-04-04 22:44:51.677624378 +0000 UTC m=+25.572073026 (delta=80.994383ms)
	I0404 22:44:51.790905   58287 fix.go:200] guest clock delta is within tolerance: 80.994383ms
	I0404 22:44:51.790912   58287 start.go:83] releasing machines lock for "old-k8s-version-343162", held for 25.541769997s
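Editor's note: the fix.go lines above run `date +%s.%N` on the guest and compare the result with the host's wall clock; here the skew is about 81 ms, which is inside tolerance, so provisioning continues without resetting the guest clock. A small sketch of that comparison, assuming the seconds.nanoseconds string has already been read back over SSH:

    package clockcheck

    import (
        "fmt"
        "strconv"
        "time"
    )

    // withinTolerance parses the guest's `date +%s.%N` output and reports the
    // guest/host skew and whether it is acceptable.
    func withinTolerance(guestDate string, host time.Time, tolerance time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(guestDate, 64)
        if err != nil {
            return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestDate, err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance, nil
    }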
	I0404 22:44:51.790939   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:44:51.791242   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:44:51.794607   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.795057   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:51.795087   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.795268   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:44:51.795894   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:44:51.796076   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:44:51.796196   58287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:44:51.796236   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:44:51.796330   58287 ssh_runner.go:195] Run: cat /version.json
	I0404 22:44:51.796360   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:44:51.799260   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.799409   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.799764   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:51.799796   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:51.799816   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.799900   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:51.799999   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:44:51.800222   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:44:51.800218   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:51.800394   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:44:51.800437   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:44:51.800569   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:44:51.800650   58287 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:44:51.800743   58287 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:44:51.922205   58287 ssh_runner.go:195] Run: systemctl --version
	I0404 22:44:51.931730   58287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:44:52.110027   58287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:44:52.116906   58287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:44:52.116984   58287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:44:52.136040   58287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:44:52.136067   58287 start.go:494] detecting cgroup driver to use...
	I0404 22:44:52.136159   58287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:44:52.156087   58287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:44:52.174271   58287 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:44:52.174335   58287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:44:52.194532   58287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:44:52.211732   58287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:44:52.399861   58287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:44:52.623231   58287 docker.go:233] disabling docker service ...
	I0404 22:44:52.623294   58287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:44:52.645765   58287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:44:52.659725   58287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:44:52.851038   58287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:44:53.001607   58287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:44:53.020529   58287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:44:53.043936   58287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0404 22:44:53.044010   58287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:44:53.057385   58287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:44:53.057447   58287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:44:53.069948   58287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:44:53.084587   58287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:44:53.097985   58287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:44:53.112049   58287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:44:53.124320   58287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:44:53.124394   58287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:44:53.141451   58287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:44:53.155764   58287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:44:53.318539   58287 ssh_runner.go:195] Run: sudo systemctl restart crio
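Editor's note: the sed commands above pin cri-o to the registry.k8s.io/pause:3.2 pause image and the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and restart. The same two edits, expressed in Go over the drop-in file's contents, with regular expressions that mirror the sed patterns in the log (a sketch, not minikube's code):

    package crioconf

    import "regexp"

    // rewriteCrioConf applies the two substitutions the log performs with sed:
    // pin the pause image and the cgroup manager.
    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
        pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = pauseRe.ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
        conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "`+cgroupManager+`"`)
        return conf
    }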
	I0404 22:44:53.493590   58287 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:44:53.493678   58287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:44:53.499310   58287 start.go:562] Will wait 60s for crictl version
	I0404 22:44:53.499370   58287 ssh_runner.go:195] Run: which crictl
	I0404 22:44:53.503651   58287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:44:53.544514   58287 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:44:53.544608   58287 ssh_runner.go:195] Run: crio --version
	I0404 22:44:53.579226   58287 ssh_runner.go:195] Run: crio --version
	I0404 22:44:53.613681   58287 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0404 22:44:53.615083   58287 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:44:53.618330   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:53.618717   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:44:43 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:44:53.618746   58287 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:44:53.618983   58287 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:44:53.624537   58287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:44:53.640644   58287 kubeadm.go:877] updating cluster {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:44:53.640753   58287 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:44:53.640820   58287 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:44:53.683261   58287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:44:53.683340   58287 ssh_runner.go:195] Run: which lz4
	I0404 22:44:53.688308   58287 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0404 22:44:53.693294   58287 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:44:53.693343   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0404 22:44:55.745286   58287 crio.go:462] duration metric: took 2.057007723s to copy over tarball
	I0404 22:44:55.745365   58287 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:44:58.988075   58287 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.242682056s)
	I0404 22:44:58.988103   58287 crio.go:469] duration metric: took 3.242785784s to extract the tarball
	I0404 22:44:58.988111   58287 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:44:59.035097   58287 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:44:59.099761   58287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:44:59.099783   58287 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:44:59.099877   58287 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:44:59.099894   58287 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:44:59.099908   58287 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:44:59.099932   58287 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:44:59.099950   58287 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0404 22:44:59.099975   58287 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:44:59.100139   58287 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0404 22:44:59.099875   58287 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:44:59.101688   58287 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0404 22:44:59.101873   58287 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:44:59.101949   58287 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0404 22:44:59.102043   58287 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:44:59.102141   58287 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:44:59.102169   58287 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:44:59.102304   58287 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:44:59.102386   58287 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:44:59.294171   58287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0404 22:44:59.302608   58287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0404 22:44:59.325906   58287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:44:59.329741   58287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:44:59.332951   58287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:44:59.381348   58287 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0404 22:44:59.381385   58287 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:44:59.381429   58287 ssh_runner.go:195] Run: which crictl
	I0404 22:44:59.397462   58287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0404 22:44:59.413188   58287 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0404 22:44:59.413235   58287 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0404 22:44:59.413293   58287 ssh_runner.go:195] Run: which crictl
	I0404 22:44:59.468170   58287 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0404 22:44:59.468215   58287 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:44:59.468231   58287 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0404 22:44:59.468257   58287 ssh_runner.go:195] Run: which crictl
	I0404 22:44:59.468268   58287 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:44:59.468304   58287 ssh_runner.go:195] Run: which crictl
	I0404 22:44:59.477017   58287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:44:59.481135   58287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0404 22:44:59.481228   58287 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0404 22:44:59.481265   58287 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:44:59.481291   58287 ssh_runner.go:195] Run: which crictl
	I0404 22:44:59.516921   58287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0404 22:44:59.516954   58287 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0404 22:44:59.516999   58287 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0404 22:44:59.517036   58287 ssh_runner.go:195] Run: which crictl
	I0404 22:44:59.517045   58287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:44:59.517101   58287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:44:59.590140   58287 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0404 22:44:59.590166   58287 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0404 22:44:59.590237   58287 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:44:59.590324   58287 ssh_runner.go:195] Run: which crictl
	I0404 22:44:59.590246   58287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:44:59.665717   58287 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0404 22:44:59.674034   58287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0404 22:44:59.674139   58287 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0404 22:44:59.674214   58287 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0404 22:44:59.674220   58287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:44:59.674429   58287 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0404 22:44:59.733490   58287 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0404 22:44:59.733546   58287 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0404 22:44:59.985163   58287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:45:00.141855   58287 cache_images.go:92] duration metric: took 1.042048359s to LoadCachedImages
	W0404 22:45:00.141961   58287 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0404 22:45:00.141982   58287 kubeadm.go:928] updating node { 192.168.39.247 8443 v1.20.0 crio true true} ...
	I0404 22:45:00.142110   58287 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-343162 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:45:00.142212   58287 ssh_runner.go:195] Run: crio config
	I0404 22:45:00.212094   58287 cni.go:84] Creating CNI manager for ""
	I0404 22:45:00.212138   58287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:45:00.212150   58287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:45:00.212175   58287 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-343162 NodeName:old-k8s-version-343162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0404 22:45:00.212384   58287 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-343162"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:45:00.212449   58287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0404 22:45:00.226233   58287 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:45:00.226287   58287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:45:00.238043   58287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0404 22:45:00.260367   58287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:45:00.281077   58287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
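The 2123-byte payload written to kubeadm.yaml.new here is the kubeadm configuration printed above; the log later shows it being copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs against it. When triaging a failed start it can help to exercise only the preflight phase against that file; a minimal sketch, assuming shell access to the node (the path matches the one used further down in this log):

    # Re-run just the kubeadm preflight checks against the generated config.
    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml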
	I0404 22:45:00.301184   58287 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0404 22:45:00.305583   58287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:45:00.320452   58287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:45:00.469165   58287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:45:00.489549   58287 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162 for IP: 192.168.39.247
	I0404 22:45:00.489581   58287 certs.go:194] generating shared ca certs ...
	I0404 22:45:00.489602   58287 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:45:00.489791   58287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:45:00.489847   58287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:45:00.489857   58287 certs.go:256] generating profile certs ...
	I0404 22:45:00.489925   58287 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.key
	I0404 22:45:00.489938   58287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.crt with IP's: []
	I0404 22:45:00.600905   58287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.crt ...
	I0404 22:45:00.600935   58287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.crt: {Name:mk54f9f1612caced32be219199f158fb042fe7ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:45:00.601123   58287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.key ...
	I0404 22:45:00.601150   58287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.key: {Name:mk13969b3bb0828a24bdc3988d9a6dd0026d2a4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:45:00.601255   58287 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key.184368d7
	I0404 22:45:00.601279   58287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt.184368d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.247]
	I0404 22:45:00.844136   58287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt.184368d7 ...
	I0404 22:45:00.844171   58287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt.184368d7: {Name:mkf056b03c2f25f1a7abc2335073f4ee43d60a63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:45:00.844372   58287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key.184368d7 ...
	I0404 22:45:00.844400   58287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key.184368d7: {Name:mk5b871efb52751dcaf6b3518d18d8d5cbbbbec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:45:00.844495   58287 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt.184368d7 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt
	I0404 22:45:00.844590   58287 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key.184368d7 -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key
	I0404 22:45:00.844649   58287 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key
	I0404 22:45:00.844664   58287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.crt with IP's: []
	I0404 22:45:01.094794   58287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.crt ...
	I0404 22:45:01.094829   58287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.crt: {Name:mk2776131d0a6646fa3ff672c6fc4807fe19190f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:45:01.095019   58287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key ...
	I0404 22:45:01.095036   58287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key: {Name:mkcedd13c3344ca6ec1bbbf49acc8f616caab82d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:45:01.095261   58287 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:45:01.095306   58287 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:45:01.095323   58287 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:45:01.095358   58287 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:45:01.095394   58287 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:45:01.095434   58287 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:45:01.095495   58287 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:45:01.096300   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:45:01.133260   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:45:01.168522   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:45:01.281676   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:45:01.313219   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0404 22:45:01.351426   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 22:45:01.386855   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:45:01.424264   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:45:01.474394   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:45:01.527491   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:45:01.566831   58287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:45:01.597057   58287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:45:01.622517   58287 ssh_runner.go:195] Run: openssl version
	I0404 22:45:01.629130   58287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:45:01.644901   58287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:45:01.650437   58287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:45:01.650496   58287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:45:01.657919   58287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:45:01.670228   58287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:45:01.686584   58287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:45:01.692421   58287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:45:01.692471   58287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:45:01.700038   58287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:45:01.718284   58287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:45:01.733468   58287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:45:01.739769   58287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:45:01.739833   58287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:45:01.747145   58287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
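The certificate steps above follow the usual OpenSSL hashed-directory convention: each PEM is linked into /etc/ssl/certs under its subject hash (here b5213941.0 for minikubeCA.pem), which is what lets TLS clients on the node trust the minikube CA. A minimal sketch of the same two steps for one certificate, with illustrative paths:

    # Compute the subject hash and create the hashed symlink the OpenSSL trust directory expects.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"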
	I0404 22:45:01.763551   58287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:45:01.769124   58287 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0404 22:45:01.769205   58287 kubeadm.go:391] StartCluster: {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:45:01.769313   58287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:45:01.769444   58287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:45:01.822004   58287 cri.go:89] found id: ""
	I0404 22:45:01.822086   58287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0404 22:45:01.836364   58287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:45:01.849483   58287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:45:01.862660   58287 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:45:01.862689   58287 kubeadm.go:156] found existing configuration files:
	
	I0404 22:45:01.862741   58287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:45:01.875715   58287 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:45:01.875775   58287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:45:01.891862   58287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:45:01.907173   58287 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:45:01.907252   58287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:45:01.919292   58287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:45:01.930530   58287 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:45:01.930594   58287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:45:01.943317   58287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:45:01.954238   58287 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:45:01.954319   58287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:45:01.965565   58287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 22:45:02.097992   58287 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 22:45:02.098282   58287 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 22:45:02.281112   58287 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 22:45:02.281273   58287 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 22:45:02.281410   58287 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 22:45:02.588058   58287 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 22:45:02.590307   58287 out.go:204]   - Generating certificates and keys ...
	I0404 22:45:02.590422   58287 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 22:45:02.590513   58287 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 22:45:02.926533   58287 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0404 22:45:03.086364   58287 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0404 22:45:03.239110   58287 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0404 22:45:03.465177   58287 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0404 22:45:03.571900   58287 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0404 22:45:03.572082   58287 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-343162] and IPs [192.168.39.247 127.0.0.1 ::1]
	I0404 22:45:03.954073   58287 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0404 22:45:03.954515   58287 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-343162] and IPs [192.168.39.247 127.0.0.1 ::1]
	I0404 22:45:04.123577   58287 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0404 22:45:04.265955   58287 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0404 22:45:04.985558   58287 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0404 22:45:04.985868   58287 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 22:45:05.215355   58287 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 22:45:05.487322   58287 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 22:45:05.740958   58287 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 22:45:05.921102   58287 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 22:45:05.948354   58287 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 22:45:05.948580   58287 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 22:45:05.948646   58287 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 22:45:06.133007   58287 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 22:45:06.135034   58287 out.go:204]   - Booting up control plane ...
	I0404 22:45:06.135172   58287 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 22:45:06.139078   58287 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 22:45:06.140586   58287 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 22:45:06.142909   58287 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 22:45:06.150787   58287 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 22:45:46.143758   58287 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 22:45:46.144823   58287 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:45:46.145210   58287 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:45:51.145076   58287 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:45:51.145401   58287 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:46:01.144935   58287 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:46:01.145246   58287 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:46:21.144384   58287 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:46:21.144703   58287 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:47:01.145803   58287 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:47:01.146342   58287 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:47:01.146368   58287 kubeadm.go:309] 
	I0404 22:47:01.146454   58287 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 22:47:01.146559   58287 kubeadm.go:309] 		timed out waiting for the condition
	I0404 22:47:01.146584   58287 kubeadm.go:309] 
	I0404 22:47:01.146662   58287 kubeadm.go:309] 	This error is likely caused by:
	I0404 22:47:01.146746   58287 kubeadm.go:309] 		- The kubelet is not running
	I0404 22:47:01.146976   58287 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 22:47:01.146991   58287 kubeadm.go:309] 
	I0404 22:47:01.147216   58287 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 22:47:01.147263   58287 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 22:47:01.147330   58287 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 22:47:01.147345   58287 kubeadm.go:309] 
	I0404 22:47:01.147620   58287 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 22:47:01.147842   58287 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 22:47:01.147863   58287 kubeadm.go:309] 
	I0404 22:47:01.148114   58287 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 22:47:01.148350   58287 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 22:47:01.148574   58287 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 22:47:01.148840   58287 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 22:47:01.148877   58287 kubeadm.go:309] 
	I0404 22:47:01.149337   58287 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 22:47:01.149645   58287 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 22:47:01.149738   58287 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0404 22:47:01.149929   58287 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-343162] and IPs [192.168.39.247 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-343162] and IPs [192.168.39.247 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-343162] and IPs [192.168.39.247 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-343162] and IPs [192.168.39.247 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
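At this point the first kubeadm init attempt has timed out waiting for the kubelet health endpoint, and the lines below show minikube's retry path: a kubeadm reset followed by a second init against the same configuration. The kubeadm output above already names the useful triage commands; run on the node (for example over minikube ssh with this profile) they would look like the following sketch, which is illustrative and not part of the test run:

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause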
	
	I0404 22:47:01.149987   58287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 22:47:03.195508   58287 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.04549059s)
	I0404 22:47:03.195579   58287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:47:03.215553   58287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:47:03.227615   58287 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:47:03.227648   58287 kubeadm.go:156] found existing configuration files:
	
	I0404 22:47:03.227700   58287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:47:03.239131   58287 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:47:03.239190   58287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:47:03.251229   58287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:47:03.263452   58287 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:47:03.263506   58287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:47:03.274843   58287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:47:03.285581   58287 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:47:03.285671   58287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:47:03.296636   58287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:47:03.307131   58287 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:47:03.307193   58287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:47:03.318118   58287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 22:47:03.394460   58287 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 22:47:03.394609   58287 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 22:47:03.556156   58287 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 22:47:03.556332   58287 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 22:47:03.556485   58287 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 22:47:03.755324   58287 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 22:47:03.758253   58287 out.go:204]   - Generating certificates and keys ...
	I0404 22:47:03.758352   58287 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 22:47:03.758438   58287 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 22:47:03.758551   58287 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 22:47:03.758629   58287 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 22:47:03.758742   58287 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 22:47:03.758826   58287 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 22:47:03.758911   58287 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 22:47:03.758985   58287 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 22:47:03.759215   58287 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 22:47:03.759895   58287 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 22:47:03.759956   58287 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 22:47:03.760033   58287 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 22:47:04.010862   58287 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 22:47:04.287978   58287 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 22:47:04.425894   58287 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 22:47:04.552639   58287 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 22:47:04.578502   58287 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 22:47:04.580065   58287 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 22:47:04.580155   58287 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 22:47:04.788978   58287 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 22:47:04.790806   58287 out.go:204]   - Booting up control plane ...
	I0404 22:47:04.790954   58287 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 22:47:04.799806   58287 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 22:47:04.802355   58287 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 22:47:04.805148   58287 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 22:47:04.813249   58287 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 22:47:44.815862   58287 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 22:47:44.816297   58287 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:47:44.816514   58287 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:47:49.817135   58287 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:47:49.817430   58287 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:47:59.817721   58287 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:47:59.817935   58287 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:48:19.817491   58287 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:48:19.817723   58287 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:48:59.817497   58287 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 22:48:59.817709   58287 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 22:48:59.817874   58287 kubeadm.go:309] 
	I0404 22:48:59.817941   58287 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 22:48:59.818418   58287 kubeadm.go:309] 		timed out waiting for the condition
	I0404 22:48:59.818441   58287 kubeadm.go:309] 
	I0404 22:48:59.818474   58287 kubeadm.go:309] 	This error is likely caused by:
	I0404 22:48:59.818533   58287 kubeadm.go:309] 		- The kubelet is not running
	I0404 22:48:59.818678   58287 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 22:48:59.818690   58287 kubeadm.go:309] 
	I0404 22:48:59.818814   58287 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 22:48:59.818860   58287 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 22:48:59.818913   58287 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 22:48:59.818925   58287 kubeadm.go:309] 
	I0404 22:48:59.819050   58287 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 22:48:59.819186   58287 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 22:48:59.819202   58287 kubeadm.go:309] 
	I0404 22:48:59.819347   58287 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 22:48:59.819474   58287 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 22:48:59.819547   58287 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 22:48:59.819637   58287 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 22:48:59.819947   58287 kubeadm.go:309] 
	I0404 22:48:59.822501   58287 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 22:48:59.822597   58287 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 22:48:59.822665   58287 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0404 22:48:59.822727   58287 kubeadm.go:393] duration metric: took 3m58.05352537s to StartCluster
	I0404 22:48:59.822784   58287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:48:59.822857   58287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:48:59.876103   58287 cri.go:89] found id: ""
	I0404 22:48:59.876147   58287 logs.go:276] 0 containers: []
	W0404 22:48:59.876155   58287 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:48:59.876165   58287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:48:59.876217   58287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:48:59.914104   58287 cri.go:89] found id: ""
	I0404 22:48:59.914125   58287 logs.go:276] 0 containers: []
	W0404 22:48:59.914132   58287 logs.go:278] No container was found matching "etcd"
	I0404 22:48:59.914138   58287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:48:59.914181   58287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:48:59.951976   58287 cri.go:89] found id: ""
	I0404 22:48:59.952011   58287 logs.go:276] 0 containers: []
	W0404 22:48:59.952024   58287 logs.go:278] No container was found matching "coredns"
	I0404 22:48:59.952031   58287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:48:59.952104   58287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:48:59.988909   58287 cri.go:89] found id: ""
	I0404 22:48:59.988936   58287 logs.go:276] 0 containers: []
	W0404 22:48:59.988945   58287 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:48:59.988950   58287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:48:59.989009   58287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:49:00.028155   58287 cri.go:89] found id: ""
	I0404 22:49:00.028178   58287 logs.go:276] 0 containers: []
	W0404 22:49:00.028187   58287 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:49:00.028193   58287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:49:00.028274   58287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:49:00.066791   58287 cri.go:89] found id: ""
	I0404 22:49:00.066820   58287 logs.go:276] 0 containers: []
	W0404 22:49:00.066831   58287 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:49:00.066841   58287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:49:00.066900   58287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:49:00.103209   58287 cri.go:89] found id: ""
	I0404 22:49:00.103239   58287 logs.go:276] 0 containers: []
	W0404 22:49:00.103248   58287 logs.go:278] No container was found matching "kindnet"
	I0404 22:49:00.103258   58287 logs.go:123] Gathering logs for kubelet ...
	I0404 22:49:00.103269   58287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:49:00.168168   58287 logs.go:123] Gathering logs for dmesg ...
	I0404 22:49:00.168211   58287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:49:00.182866   58287 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:49:00.182898   58287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:49:00.304463   58287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:49:00.304483   58287 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:49:00.304498   58287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:49:00.401551   58287 logs.go:123] Gathering logs for container status ...
	I0404 22:49:00.401583   58287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0404 22:49:00.455361   58287 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0404 22:49:00.455407   58287 out.go:239] * 
	* 
	W0404 22:49:00.455481   58287 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 22:49:00.455516   58287 out.go:239] * 
	* 
	W0404 22:49:00.456511   58287 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 22:49:00.460344   58287 out.go:177] 
	W0404 22:49:00.461674   58287 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 22:49:00.461748   58287 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0404 22:49:00.461766   58287 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0404 22:49:00.463533   58287 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-343162 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 6 (239.216545ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:49:00.741234   64379 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-343162" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-343162" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (274.66s)
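Editor's note (not part of the recorded test output): the kubeadm output captured above already names the troubleshooting steps (systemctl status kubelet, journalctl -xeu kubelet, crictl ps/logs), and minikube's own suggestion is to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of how one might run those checks against this profile, assuming the profile name old-k8s-version-343162 and the CRI-O socket path exactly as they appear in the log:

	# Inspect kubelet state inside the minikube VM (commands taken from the kubeadm hint above)
	minikube ssh -p old-k8s-version-343162 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-343162 "sudo journalctl -xeu kubelet | tail -n 200"

	# List control-plane containers through CRI-O, then dump the logs of a failing one
	# (CONTAINERID is a placeholder, as in the kubeadm hint)
	minikube ssh -p old-k8s-version-343162 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	minikube ssh -p old-k8s-version-343162 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"

	# Retry with the cgroup-driver override minikube suggests: the same start command as in the
	# failure above, with the extra kubelet config appended
	out/minikube-linux-amd64 start -p old-k8s-version-343162 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

Whether the override helps depends on why the kubelet never became healthy; the related issue linked in the log (https://github.com/kubernetes/minikube/issues/4172) tracks the same symptom.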

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-952083 --alsologtostderr -v=3
E0404 22:47:05.180205   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-952083 --alsologtostderr -v=3: exit status 82 (2m0.576387436s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-952083"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 22:47:05.159662   63810 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:47:05.159794   63810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:47:05.159799   63810 out.go:304] Setting ErrFile to fd 2...
	I0404 22:47:05.159803   63810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:47:05.160003   63810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:47:05.160274   63810 out.go:298] Setting JSON to false
	I0404 22:47:05.160354   63810 mustload.go:65] Loading cluster: default-k8s-diff-port-952083
	I0404 22:47:05.160734   63810 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:47:05.160790   63810 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/config.json ...
	I0404 22:47:05.160945   63810 mustload.go:65] Loading cluster: default-k8s-diff-port-952083
	I0404 22:47:05.161037   63810 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:47:05.161054   63810 stop.go:39] StopHost: default-k8s-diff-port-952083
	I0404 22:47:05.161403   63810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:47:05.161456   63810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:47:05.178906   63810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39413
	I0404 22:47:05.180945   63810 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:47:05.181649   63810 main.go:141] libmachine: Using API Version  1
	I0404 22:47:05.181679   63810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:47:05.182261   63810 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:47:05.184889   63810 out.go:177] * Stopping node "default-k8s-diff-port-952083"  ...
	I0404 22:47:05.186288   63810 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0404 22:47:05.186328   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:47:05.188263   63810 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0404 22:47:05.188295   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:47:05.191923   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:47:05.192437   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:46:07 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:47:05.192515   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:47:05.192899   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:47:05.193099   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:47:05.193385   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:47:05.193582   63810 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:47:05.324478   63810 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0404 22:47:05.389277   63810 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0404 22:47:05.451824   63810 main.go:141] libmachine: Stopping "default-k8s-diff-port-952083"...
	I0404 22:47:05.451862   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 22:47:05.453724   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Stop
	I0404 22:47:05.457986   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 0/120
	I0404 22:47:06.460305   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 1/120
	I0404 22:47:07.462605   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 2/120
	I0404 22:47:08.464467   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 3/120
	I0404 22:47:09.466587   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 4/120
	I0404 22:47:10.468411   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 5/120
	I0404 22:47:11.470612   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 6/120
	I0404 22:47:12.472051   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 7/120
	I0404 22:47:13.473324   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 8/120
	I0404 22:47:14.474785   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 9/120
	I0404 22:47:15.476817   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 10/120
	I0404 22:47:16.478808   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 11/120
	I0404 22:47:17.480169   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 12/120
	I0404 22:47:18.481347   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 13/120
	I0404 22:47:19.482663   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 14/120
	I0404 22:47:20.484344   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 15/120
	I0404 22:47:21.485706   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 16/120
	I0404 22:47:22.487061   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 17/120
	I0404 22:47:23.488602   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 18/120
	I0404 22:47:24.490901   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 19/120
	I0404 22:47:25.492907   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 20/120
	I0404 22:47:26.495375   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 21/120
	I0404 22:47:27.496809   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 22/120
	I0404 22:47:28.498241   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 23/120
	I0404 22:47:29.499637   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 24/120
	I0404 22:47:30.501429   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 25/120
	I0404 22:47:31.502915   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 26/120
	I0404 22:47:32.504294   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 27/120
	I0404 22:47:33.505627   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 28/120
	I0404 22:47:34.507042   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 29/120
	I0404 22:47:35.509331   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 30/120
	I0404 22:47:36.510587   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 31/120
	I0404 22:47:37.513584   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 32/120
	I0404 22:47:38.514983   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 33/120
	I0404 22:47:39.516512   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 34/120
	I0404 22:47:40.518519   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 35/120
	I0404 22:47:41.519717   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 36/120
	I0404 22:47:42.521215   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 37/120
	I0404 22:47:43.522777   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 38/120
	I0404 22:47:44.524413   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 39/120
	I0404 22:47:45.526819   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 40/120
	I0404 22:47:46.528245   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 41/120
	I0404 22:47:47.529711   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 42/120
	I0404 22:47:48.531171   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 43/120
	I0404 22:47:49.532754   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 44/120
	I0404 22:47:50.535049   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 45/120
	I0404 22:47:51.536533   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 46/120
	I0404 22:47:52.538424   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 47/120
	I0404 22:47:53.539832   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 48/120
	I0404 22:47:54.541447   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 49/120
	I0404 22:47:55.542851   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 50/120
	I0404 22:47:56.544182   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 51/120
	I0404 22:47:57.545859   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 52/120
	I0404 22:47:58.547429   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 53/120
	I0404 22:47:59.549226   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 54/120
	I0404 22:48:00.551477   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 55/120
	I0404 22:48:01.552828   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 56/120
	I0404 22:48:02.554383   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 57/120
	I0404 22:48:03.555893   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 58/120
	I0404 22:48:04.557935   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 59/120
	I0404 22:48:05.560214   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 60/120
	I0404 22:48:06.561692   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 61/120
	I0404 22:48:07.563189   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 62/120
	I0404 22:48:08.564766   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 63/120
	I0404 22:48:09.566906   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 64/120
	I0404 22:48:10.569296   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 65/120
	I0404 22:48:11.570641   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 66/120
	I0404 22:48:12.572261   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 67/120
	I0404 22:48:13.573625   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 68/120
	I0404 22:48:14.575465   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 69/120
	I0404 22:48:15.577251   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 70/120
	I0404 22:48:16.578805   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 71/120
	I0404 22:48:17.580335   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 72/120
	I0404 22:48:18.581902   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 73/120
	I0404 22:48:19.583585   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 74/120
	I0404 22:48:20.585702   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 75/120
	I0404 22:48:21.587267   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 76/120
	I0404 22:48:22.588925   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 77/120
	I0404 22:48:23.590374   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 78/120
	I0404 22:48:24.592071   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 79/120
	I0404 22:48:25.594508   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 80/120
	I0404 22:48:26.596136   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 81/120
	I0404 22:48:27.597535   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 82/120
	I0404 22:48:28.599130   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 83/120
	I0404 22:48:29.600606   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 84/120
	I0404 22:48:30.602972   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 85/120
	I0404 22:48:31.604489   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 86/120
	I0404 22:48:32.606098   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 87/120
	I0404 22:48:33.607615   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 88/120
	I0404 22:48:34.609361   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 89/120
	I0404 22:48:35.611481   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 90/120
	I0404 22:48:36.613413   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 91/120
	I0404 22:48:37.614766   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 92/120
	I0404 22:48:38.616468   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 93/120
	I0404 22:48:39.617812   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 94/120
	I0404 22:48:40.620291   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 95/120
	I0404 22:48:41.621964   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 96/120
	I0404 22:48:42.623567   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 97/120
	I0404 22:48:43.625018   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 98/120
	I0404 22:48:44.626862   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 99/120
	I0404 22:48:45.629289   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 100/120
	I0404 22:48:46.630716   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 101/120
	I0404 22:48:47.632144   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 102/120
	I0404 22:48:48.634051   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 103/120
	I0404 22:48:49.635606   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 104/120
	I0404 22:48:50.637619   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 105/120
	I0404 22:48:51.639148   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 106/120
	I0404 22:48:52.640676   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 107/120
	I0404 22:48:53.642217   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 108/120
	I0404 22:48:54.644532   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 109/120
	I0404 22:48:55.646967   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 110/120
	I0404 22:48:56.648312   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 111/120
	I0404 22:48:57.649973   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 112/120
	I0404 22:48:58.651244   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 113/120
	I0404 22:48:59.652958   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 114/120
	I0404 22:49:00.654549   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 115/120
	I0404 22:49:01.656284   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 116/120
	I0404 22:49:02.658862   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 117/120
	I0404 22:49:03.660240   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 118/120
	I0404 22:49:04.661828   63810 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for machine to stop 119/120
	I0404 22:49:05.662724   63810 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0404 22:49:05.662790   63810 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0404 22:49:05.664731   63810 out.go:177] 
	W0404 22:49:05.666075   63810 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0404 22:49:05.666096   63810 out.go:239] * 
	* 
	W0404 22:49:05.668746   63810 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 22:49:05.670062   63810 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-952083 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083
E0404 22:49:08.061372   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:49:09.153434   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:49:13.465093   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:49:13.470433   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:49:13.480741   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:49:13.501162   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:49:13.541495   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:49:13.622074   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:49:13.782610   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:49:14.103288   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:49:14.350198   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:49:14.743694   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083: exit status 3 (18.556621792s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:49:24.228409   64523 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.148:22: connect: no route to host
	E0404 22:49:24.228427   64523 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.148:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-952083" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.13s)
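The failure above follows the same shape in each of these Stop tests: minikube backs up /etc/cni and /etc/kubernetes, asks the kvm2 driver to stop the VM, then polls the machine state ("Waiting for machine to stop N/120") roughly once per second for 120 attempts before exiting with GUEST_STOP_TIMEOUT. The following is a minimal, illustrative Go sketch of that polling pattern, not minikube's actual code; the Machine interface, the state string, and stopWithTimeout are hypothetical names used only for this example.

package vmstop

import (
	"errors"
	"fmt"
	"time"
)

// Machine is a hypothetical stand-in for the libmachine driver API seen in
// the log above; only the two calls needed for the example are modeled.
type Machine interface {
	Stop() error            // request a stop of the VM
	State() (string, error) // report the current VM state, e.g. "Running"
}

// stopWithTimeout issues a stop request and then polls the machine state
// about once per second, for at most `attempts` tries, mirroring the
// "Waiting for machine to stop N/120" lines in the log. If the VM never
// leaves the "Running" state, it returns a timeout-style error similar to
// the GUEST_STOP_TIMEOUT failure shown above.
func stopWithTimeout(m Machine, attempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		state, err := m.State()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil // VM reached a non-running state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

In each of the Stop failures in this report the loop exhausts all 120 attempts, which is why the stop command exits with status 82 (GUEST_STOP_TIMEOUT) and the follow-up status check runs against a host that is no longer reachable.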

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-143118 --alsologtostderr -v=3
E0404 22:47:18.492749   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-143118 --alsologtostderr -v=3: exit status 82 (2m0.532359223s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-143118"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 22:47:14.919410   63930 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:47:14.919520   63930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:47:14.919542   63930 out.go:304] Setting ErrFile to fd 2...
	I0404 22:47:14.919546   63930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:47:14.919739   63930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:47:14.919988   63930 out.go:298] Setting JSON to false
	I0404 22:47:14.920060   63930 mustload.go:65] Loading cluster: embed-certs-143118
	I0404 22:47:14.920428   63930 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:47:14.920490   63930 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/config.json ...
	I0404 22:47:14.920637   63930 mustload.go:65] Loading cluster: embed-certs-143118
	I0404 22:47:14.920735   63930 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:47:14.920757   63930 stop.go:39] StopHost: embed-certs-143118
	I0404 22:47:14.921118   63930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:47:14.921172   63930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:47:14.936515   63930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0404 22:47:14.936949   63930 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:47:14.937607   63930 main.go:141] libmachine: Using API Version  1
	I0404 22:47:14.937645   63930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:47:14.938000   63930 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:47:14.940279   63930 out.go:177] * Stopping node "embed-certs-143118"  ...
	I0404 22:47:14.941970   63930 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0404 22:47:14.942011   63930 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:47:14.942225   63930 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0404 22:47:14.942260   63930 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:47:14.945008   63930 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:47:14.945484   63930 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:45:38 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:47:14.945527   63930 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:47:14.945609   63930 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:47:14.945770   63930 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:47:14.945932   63930 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:47:14.946108   63930 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:47:15.052424   63930 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0404 22:47:15.118427   63930 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0404 22:47:15.177847   63930 main.go:141] libmachine: Stopping "embed-certs-143118"...
	I0404 22:47:15.177879   63930 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:47:15.179417   63930 main.go:141] libmachine: (embed-certs-143118) Calling .Stop
	I0404 22:47:15.183362   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 0/120
	I0404 22:47:16.184792   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 1/120
	I0404 22:47:17.186403   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 2/120
	I0404 22:47:18.187877   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 3/120
	I0404 22:47:19.189265   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 4/120
	I0404 22:47:20.191200   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 5/120
	I0404 22:47:21.192666   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 6/120
	I0404 22:47:22.194203   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 7/120
	I0404 22:47:23.195584   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 8/120
	I0404 22:47:24.197140   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 9/120
	I0404 22:47:25.199254   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 10/120
	I0404 22:47:26.200709   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 11/120
	I0404 22:47:27.202166   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 12/120
	I0404 22:47:28.203534   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 13/120
	I0404 22:47:29.204884   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 14/120
	I0404 22:47:30.206905   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 15/120
	I0404 22:47:31.208090   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 16/120
	I0404 22:47:32.209534   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 17/120
	I0404 22:47:33.210916   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 18/120
	I0404 22:47:34.212472   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 19/120
	I0404 22:47:35.214720   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 20/120
	I0404 22:47:36.216388   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 21/120
	I0404 22:47:37.218649   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 22/120
	I0404 22:47:38.220312   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 23/120
	I0404 22:47:39.221931   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 24/120
	I0404 22:47:40.224211   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 25/120
	I0404 22:47:41.225517   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 26/120
	I0404 22:47:42.226967   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 27/120
	I0404 22:47:43.228599   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 28/120
	I0404 22:47:44.230023   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 29/120
	I0404 22:47:45.232416   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 30/120
	I0404 22:47:46.233845   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 31/120
	I0404 22:47:47.235290   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 32/120
	I0404 22:47:48.236889   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 33/120
	I0404 22:47:49.238414   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 34/120
	I0404 22:47:50.240592   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 35/120
	I0404 22:47:51.241809   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 36/120
	I0404 22:47:52.243534   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 37/120
	I0404 22:47:53.244981   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 38/120
	I0404 22:47:54.246350   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 39/120
	I0404 22:47:55.248661   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 40/120
	I0404 22:47:56.250715   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 41/120
	I0404 22:47:57.252216   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 42/120
	I0404 22:47:58.253590   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 43/120
	I0404 22:47:59.255332   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 44/120
	I0404 22:48:00.257494   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 45/120
	I0404 22:48:01.259103   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 46/120
	I0404 22:48:02.260611   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 47/120
	I0404 22:48:03.262267   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 48/120
	I0404 22:48:04.264006   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 49/120
	I0404 22:48:05.266372   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 50/120
	I0404 22:48:06.267786   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 51/120
	I0404 22:48:07.269272   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 52/120
	I0404 22:48:08.270863   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 53/120
	I0404 22:48:09.272381   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 54/120
	I0404 22:48:10.274963   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 55/120
	I0404 22:48:11.276360   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 56/120
	I0404 22:48:12.277875   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 57/120
	I0404 22:48:13.279673   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 58/120
	I0404 22:48:14.281459   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 59/120
	I0404 22:48:15.283662   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 60/120
	I0404 22:48:16.285118   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 61/120
	I0404 22:48:17.286804   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 62/120
	I0404 22:48:18.288488   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 63/120
	I0404 22:48:19.290123   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 64/120
	I0404 22:48:20.292595   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 65/120
	I0404 22:48:21.294363   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 66/120
	I0404 22:48:22.295951   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 67/120
	I0404 22:48:23.297493   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 68/120
	I0404 22:48:24.298993   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 69/120
	I0404 22:48:25.300436   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 70/120
	I0404 22:48:26.302208   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 71/120
	I0404 22:48:27.304200   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 72/120
	I0404 22:48:28.305790   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 73/120
	I0404 22:48:29.307754   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 74/120
	I0404 22:48:30.310132   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 75/120
	I0404 22:48:31.311744   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 76/120
	I0404 22:48:32.313337   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 77/120
	I0404 22:48:33.314762   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 78/120
	I0404 22:48:34.316540   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 79/120
	I0404 22:48:35.319011   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 80/120
	I0404 22:48:36.320505   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 81/120
	I0404 22:48:37.322788   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 82/120
	I0404 22:48:38.324164   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 83/120
	I0404 22:48:39.325646   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 84/120
	I0404 22:48:40.328180   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 85/120
	I0404 22:48:41.329834   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 86/120
	I0404 22:48:42.331427   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 87/120
	I0404 22:48:43.333158   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 88/120
	I0404 22:48:44.334468   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 89/120
	I0404 22:48:45.335911   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 90/120
	I0404 22:48:46.337411   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 91/120
	I0404 22:48:47.338711   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 92/120
	I0404 22:48:48.340095   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 93/120
	I0404 22:48:49.341587   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 94/120
	I0404 22:48:50.343781   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 95/120
	I0404 22:48:51.345357   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 96/120
	I0404 22:48:52.346983   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 97/120
	I0404 22:48:53.348655   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 98/120
	I0404 22:48:54.350257   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 99/120
	I0404 22:48:55.351860   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 100/120
	I0404 22:48:56.353569   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 101/120
	I0404 22:48:57.355050   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 102/120
	I0404 22:48:58.356591   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 103/120
	I0404 22:48:59.359340   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 104/120
	I0404 22:49:00.361595   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 105/120
	I0404 22:49:01.362916   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 106/120
	I0404 22:49:02.364648   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 107/120
	I0404 22:49:03.366404   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 108/120
	I0404 22:49:04.367694   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 109/120
	I0404 22:49:05.369417   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 110/120
	I0404 22:49:06.370928   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 111/120
	I0404 22:49:07.372692   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 112/120
	I0404 22:49:08.374243   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 113/120
	I0404 22:49:09.375972   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 114/120
	I0404 22:49:10.378031   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 115/120
	I0404 22:49:11.379877   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 116/120
	I0404 22:49:12.381762   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 117/120
	I0404 22:49:13.383586   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 118/120
	I0404 22:49:14.385247   63930 main.go:141] libmachine: (embed-certs-143118) Waiting for machine to stop 119/120
	I0404 22:49:15.386592   63930 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0404 22:49:15.386676   63930 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0404 22:49:15.388948   63930 out.go:177] 
	W0404 22:49:15.390533   63930 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0404 22:49:15.390549   63930 out.go:239] * 
	* 
	W0404 22:49:15.393480   63930 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 22:49:15.394911   63930 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-143118 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-143118 -n embed-certs-143118
E0404 22:49:16.024414   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:49:18.585610   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:49:23.706161   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-143118 -n embed-certs-143118: exit status 3 (18.559638895s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:49:33.956432   64575 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host
	E0404 22:49:33.956452   64575 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-143118" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-024416 --alsologtostderr -v=3
E0404 22:47:46.140735   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:47:49.213385   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:47:52.429098   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:48:09.142865   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 22:48:30.173726   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:48:48.670507   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:48:48.675849   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:48:48.686116   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:48:48.706415   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:48:48.746823   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:48:48.827218   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:48:48.987697   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:48:49.308525   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:48:49.949460   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:48:50.480181   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 22:48:51.230558   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:48:53.791786   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:48:58.913004   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-024416 --alsologtostderr -v=3: exit status 82 (2m0.545381856s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-024416"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 22:47:31.990375   64052 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:47:31.990913   64052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:47:31.990935   64052 out.go:304] Setting ErrFile to fd 2...
	I0404 22:47:31.990942   64052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:47:31.991357   64052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:47:31.992061   64052 out.go:298] Setting JSON to false
	I0404 22:47:31.992179   64052 mustload.go:65] Loading cluster: no-preload-024416
	I0404 22:47:31.992584   64052 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:47:31.992663   64052 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/config.json ...
	I0404 22:47:31.992853   64052 mustload.go:65] Loading cluster: no-preload-024416
	I0404 22:47:31.992976   64052 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:47:31.993011   64052 stop.go:39] StopHost: no-preload-024416
	I0404 22:47:31.993446   64052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:47:31.993531   64052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:47:32.007933   64052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0404 22:47:32.008416   64052 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:47:32.008968   64052 main.go:141] libmachine: Using API Version  1
	I0404 22:47:32.008991   64052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:47:32.009401   64052 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:47:32.011857   64052 out.go:177] * Stopping node "no-preload-024416"  ...
	I0404 22:47:32.013369   64052 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0404 22:47:32.013399   64052 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:47:32.013607   64052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0404 22:47:32.013629   64052 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:47:32.016426   64052 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:47:32.016832   64052 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:45:09 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:47:32.016877   64052 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:47:32.016995   64052 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:47:32.017173   64052 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:47:32.017393   64052 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:47:32.017548   64052 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:47:32.140146   64052 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0404 22:47:32.203221   64052 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0404 22:47:32.266415   64052 main.go:141] libmachine: Stopping "no-preload-024416"...
	I0404 22:47:32.266442   64052 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:47:32.267950   64052 main.go:141] libmachine: (no-preload-024416) Calling .Stop
	I0404 22:47:32.271638   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 0/120
	I0404 22:47:33.273122   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 1/120
	I0404 22:47:34.274599   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 2/120
	I0404 22:47:35.275955   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 3/120
	I0404 22:47:36.277345   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 4/120
	I0404 22:47:37.279432   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 5/120
	I0404 22:47:38.280838   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 6/120
	I0404 22:47:39.282103   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 7/120
	I0404 22:47:40.283853   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 8/120
	I0404 22:47:41.285079   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 9/120
	I0404 22:47:42.286383   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 10/120
	I0404 22:47:43.287742   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 11/120
	I0404 22:47:44.289171   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 12/120
	I0404 22:47:45.290319   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 13/120
	I0404 22:47:46.291577   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 14/120
	I0404 22:47:47.293941   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 15/120
	I0404 22:47:48.295521   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 16/120
	I0404 22:47:49.296937   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 17/120
	I0404 22:47:50.298603   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 18/120
	I0404 22:47:51.300031   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 19/120
	I0404 22:47:52.301662   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 20/120
	I0404 22:47:53.303510   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 21/120
	I0404 22:47:54.304978   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 22/120
	I0404 22:47:55.306478   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 23/120
	I0404 22:47:56.308220   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 24/120
	I0404 22:47:57.310318   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 25/120
	I0404 22:47:58.311814   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 26/120
	I0404 22:47:59.313657   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 27/120
	I0404 22:48:00.315163   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 28/120
	I0404 22:48:01.316606   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 29/120
	I0404 22:48:02.318775   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 30/120
	I0404 22:48:03.320528   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 31/120
	I0404 22:48:04.321956   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 32/120
	I0404 22:48:05.323352   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 33/120
	I0404 22:48:06.324555   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 34/120
	I0404 22:48:07.326806   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 35/120
	I0404 22:48:08.328221   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 36/120
	I0404 22:48:09.329433   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 37/120
	I0404 22:48:10.330816   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 38/120
	I0404 22:48:11.332535   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 39/120
	I0404 22:48:12.334926   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 40/120
	I0404 22:48:13.336321   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 41/120
	I0404 22:48:14.337877   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 42/120
	I0404 22:48:15.339365   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 43/120
	I0404 22:48:16.340970   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 44/120
	I0404 22:48:17.343518   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 45/120
	I0404 22:48:18.344863   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 46/120
	I0404 22:48:19.346473   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 47/120
	I0404 22:48:20.348378   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 48/120
	I0404 22:48:21.349830   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 49/120
	I0404 22:48:22.352592   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 50/120
	I0404 22:48:23.354335   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 51/120
	I0404 22:48:24.355631   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 52/120
	I0404 22:48:25.357303   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 53/120
	I0404 22:48:26.359287   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 54/120
	I0404 22:48:27.361670   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 55/120
	I0404 22:48:28.363381   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 56/120
	I0404 22:48:29.365252   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 57/120
	I0404 22:48:30.366935   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 58/120
	I0404 22:48:31.368467   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 59/120
	I0404 22:48:32.369855   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 60/120
	I0404 22:48:33.371245   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 61/120
	I0404 22:48:34.372649   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 62/120
	I0404 22:48:35.373985   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 63/120
	I0404 22:48:36.375273   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 64/120
	I0404 22:48:37.377573   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 65/120
	I0404 22:48:38.378978   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 66/120
	I0404 22:48:39.380556   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 67/120
	I0404 22:48:40.382910   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 68/120
	I0404 22:48:41.384611   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 69/120
	I0404 22:48:42.386930   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 70/120
	I0404 22:48:43.388448   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 71/120
	I0404 22:48:44.390301   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 72/120
	I0404 22:48:45.391725   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 73/120
	I0404 22:48:46.393069   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 74/120
	I0404 22:48:47.395238   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 75/120
	I0404 22:48:48.396589   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 76/120
	I0404 22:48:49.398252   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 77/120
	I0404 22:48:50.399717   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 78/120
	I0404 22:48:51.401445   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 79/120
	I0404 22:48:52.402912   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 80/120
	I0404 22:48:53.404465   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 81/120
	I0404 22:48:54.406016   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 82/120
	I0404 22:48:55.407560   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 83/120
	I0404 22:48:56.409268   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 84/120
	I0404 22:48:57.412024   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 85/120
	I0404 22:48:58.413533   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 86/120
	I0404 22:48:59.415384   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 87/120
	I0404 22:49:00.416861   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 88/120
	I0404 22:49:01.418247   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 89/120
	I0404 22:49:02.420653   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 90/120
	I0404 22:49:03.423255   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 91/120
	I0404 22:49:04.424703   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 92/120
	I0404 22:49:05.426443   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 93/120
	I0404 22:49:06.428273   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 94/120
	I0404 22:49:07.430480   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 95/120
	I0404 22:49:08.431994   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 96/120
	I0404 22:49:09.433518   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 97/120
	I0404 22:49:10.435085   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 98/120
	I0404 22:49:11.436801   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 99/120
	I0404 22:49:12.439069   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 100/120
	I0404 22:49:13.440707   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 101/120
	I0404 22:49:14.442172   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 102/120
	I0404 22:49:15.443566   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 103/120
	I0404 22:49:16.445011   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 104/120
	I0404 22:49:17.447404   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 105/120
	I0404 22:49:18.449122   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 106/120
	I0404 22:49:19.450727   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 107/120
	I0404 22:49:20.452390   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 108/120
	I0404 22:49:21.454122   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 109/120
	I0404 22:49:22.455615   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 110/120
	I0404 22:49:23.457152   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 111/120
	I0404 22:49:24.458920   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 112/120
	I0404 22:49:25.460355   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 113/120
	I0404 22:49:26.461910   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 114/120
	I0404 22:49:27.463970   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 115/120
	I0404 22:49:28.465598   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 116/120
	I0404 22:49:29.467056   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 117/120
	I0404 22:49:30.468786   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 118/120
	I0404 22:49:31.470222   64052 main.go:141] libmachine: (no-preload-024416) Waiting for machine to stop 119/120
	I0404 22:49:32.471628   64052 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0404 22:49:32.471693   64052 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0404 22:49:32.474104   64052 out.go:177] 
	W0404 22:49:32.475763   64052 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0404 22:49:32.475782   64052 out.go:239] * 
	* 
	W0404 22:49:32.478263   64052 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 22:49:32.479585   64052 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-024416 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-024416 -n no-preload-024416
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-024416 -n no-preload-024416: exit status 3 (18.626949546s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:49:51.108535   64690 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.77:22: connect: no route to host
	E0404 22:49:51.108561   64690 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.77:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-024416" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.17s)
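For reference, the failing step above can be replayed by hand with the same commands the test drives; this is only a sketch, assuming the binaries built for this run are still available under out/:

	# Re-run the stop that timed out above (exit status 82, GUEST_STOP_TIMEOUT after 120 polls)
	out/minikube-linux-amd64 stop -p no-preload-024416 --alsologtostderr -v=3
	# Query the host state the post-mortem checks (it returned exit status 3 / "Error" here)
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-024416 -n no-preload-024416
	# Collect logs for the GitHub issue the failure box points to
	minikube logs --file=logs.txt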

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-343162 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-343162 create -f testdata/busybox.yaml: exit status 1 (50.359292ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-343162" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-343162 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 6 (231.919995ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:49:01.023616   64420 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-343162" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-343162" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 6 (234.524794ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:49:01.259084   64450 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-343162" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-343162" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)
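The create failure here is the "context does not exist" error: the profile's entry is missing from the kubeconfig the run points at, which is also what the repeated status warning says. A minimal way to check and re-sync it by hand (a sketch only; the kubeconfig path is the one reported in the log above):

	# Confirm the context really is missing from the kubeconfig used by the test
	kubectl config get-contexts --kubeconfig=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	# Re-write the context for the profile, as the warning in the status output suggests
	out/minikube-linux-amd64 update-context -p old-k8s-version-343162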

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (108.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-343162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-343162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m48.322691398s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-343162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-343162 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-343162 describe deploy/metrics-server -n kube-system: exit status 1 (43.888903ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-343162" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-343162 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 6 (237.536859ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:50:49.862765   65256 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-343162" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-343162" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (108.60s)
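The metrics-server enable fails while applying the addon manifests inside the VM because the apiserver on localhost:8443 refuses connections, so the addon callbacks cannot complete. A quick check from the host would look like the sketch below (assuming the guest is reachable at all, which the surrounding status output disputes):

	# Look at what the CRI-O runtime is actually running inside the guest
	out/minikube-linux-amd64 ssh -p old-k8s-version-343162 -- sudo crictl ps -a
	# Probe apiserver health directly from inside the guest
	out/minikube-linux-amd64 ssh -p old-k8s-version-343162 -- curl -sk https://localhost:8443/healthz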

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083: exit status 3 (3.168161544s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:49:27.396533   64617 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.148:22: connect: no route to host
	E0404 22:49:27.396556   64617 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.148:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-952083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0404 22:49:29.634522   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-952083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153324229s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.148:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-952083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083
E0404 22:49:33.946363   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083: exit status 3 (3.06257959s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:49:36.612568   64720 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.148:22: connect: no route to host
	E0404 22:49:36.612589   64720 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.148:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-952083" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-143118 -n embed-certs-143118
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-143118 -n embed-certs-143118: exit status 3 (3.167914045s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:49:37.124540   64750 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host
	E0404 22:49:37.124567   64750 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-143118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0404 22:49:37.225955   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:49:37.546560   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:49:38.187547   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:49:39.468712   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:49:42.029751   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-143118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152688284s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-143118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-143118 -n embed-certs-143118
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-143118 -n embed-certs-143118: exit status 3 (3.062958266s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:49:46.340509   64861 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host
	E0404 22:49:46.340530   64861 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-143118" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-024416 -n no-preload-024416
E0404 22:49:52.093917   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-024416 -n no-preload-024416: exit status 3 (3.167922002s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:49:54.276502   64947 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.77:22: connect: no route to host
	E0404 22:49:54.276529   64947 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.77:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-024416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0404 22:49:54.427477   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:49:57.391145   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-024416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152656903s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.77:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-024416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-024416 -n no-preload-024416
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-024416 -n no-preload-024416: exit status 3 (3.063097268s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0404 22:50:03.492598   65017 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.77:22: connect: no route to host
	E0404 22:50:03.492623   65017 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.77:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-024416" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
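As in the two EnableAddonAfterStop failures above, every status and addon call in this block dies on "no route to host" to the guest's SSH port, i.e. the VM never became reachable again after the failed stop. With the kvm2 driver the domain can also be inspected from the hypervisor side; a sketch, assuming the libvirt domain carries the profile name and the qemu:///system URI shown elsewhere in this report:

	# Check the domain state directly via libvirt
	virsh -c qemu:///system domstate no-preload-024416
	virsh -c qemu:///system list --all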

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (751.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-343162 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0404 22:50:58.831997   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:51:04.561934   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:51:24.217085   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:51:30.506110   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:51:32.516419   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:51:45.522256   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:51:51.902028   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:51:57.308209   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:51:58.191186   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:52:08.250849   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:52:20.752337   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:52:35.934754   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:53:07.443158   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:53:09.142351   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 22:53:48.669742   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:53:50.480665   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 22:54:13.465155   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:54:16.357491   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:54:32.189232   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 22:54:36.908112   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:54:41.149437   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:55:04.593362   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:55:23.598508   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:55:51.284081   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:56:24.216557   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:56:30.506312   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:57:08.250438   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:58:09.142712   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 22:58:48.670527   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:58:50.480178   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 22:59:13.464682   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-343162 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m27.6508163s)

                                                
                                                
-- stdout --
	* [old-k8s-version-343162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-343162" primary control-plane node in "old-k8s-version-343162" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-343162" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 22:50:56.470398   65393 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:50:56.470518   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470527   65393 out.go:304] Setting ErrFile to fd 2...
	I0404 22:50:56.470531   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470702   65393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:50:56.471207   65393 out.go:298] Setting JSON to false
	I0404 22:50:56.472107   65393 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5602,"bootTime":1712265455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:50:56.472206   65393 start.go:139] virtualization: kvm guest
	I0404 22:50:56.474636   65393 out.go:177] * [old-k8s-version-343162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:50:56.477086   65393 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:50:56.477116   65393 notify.go:220] Checking for updates...
	I0404 22:50:56.479875   65393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:50:56.481381   65393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:50:56.482598   65393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:50:56.483896   65393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:50:56.485276   65393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:50:56.487030   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:50:56.487414   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.487471   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.502537   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44449
	I0404 22:50:56.502977   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.503568   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.503592   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.503915   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.504146   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.506219   65393 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0404 22:50:56.507513   65393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:50:56.507838   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.507872   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.522700   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0404 22:50:56.523202   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.523675   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.523702   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.524012   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.524213   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.560806   65393 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 22:50:56.562468   65393 start.go:297] selected driver: kvm2
	I0404 22:50:56.562485   65393 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.562593   65393 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:50:56.563382   65393 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.563463   65393 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:50:56.579804   65393 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:50:56.580216   65393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:50:56.580311   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:50:56.580325   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:50:56.580361   65393 start.go:340] cluster config:
	{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.580508   65393 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.582753   65393 out.go:177] * Starting "old-k8s-version-343162" primary control-plane node in "old-k8s-version-343162" cluster
	I0404 22:50:56.584381   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:50:56.584440   65393 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0404 22:50:56.584451   65393 cache.go:56] Caching tarball of preloaded images
	I0404 22:50:56.584585   65393 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 22:50:56.584616   65393 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0404 22:50:56.584754   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:50:56.585041   65393 start.go:360] acquireMachinesLock for old-k8s-version-343162: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:54:55.277382   65393 start.go:364] duration metric: took 3m58.692301923s to acquireMachinesLock for "old-k8s-version-343162"
	I0404 22:54:55.277485   65393 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:55.277502   65393 fix.go:54] fixHost starting: 
	I0404 22:54:55.277878   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:55.277920   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:55.297525   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0404 22:54:55.297963   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:55.298433   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:54:55.298456   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:55.298792   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:55.298962   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:54:55.299129   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetState
	I0404 22:54:55.300682   65393 fix.go:112] recreateIfNeeded on old-k8s-version-343162: state=Stopped err=<nil>
	I0404 22:54:55.300717   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	W0404 22:54:55.300937   65393 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:55.303925   65393 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-343162" ...
	I0404 22:54:55.305412   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .Start
	I0404 22:54:55.305607   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring networks are active...
	I0404 22:54:55.306344   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network default is active
	I0404 22:54:55.306786   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network mk-old-k8s-version-343162 is active
	I0404 22:54:55.307281   65393 main.go:141] libmachine: (old-k8s-version-343162) Getting domain xml...
	I0404 22:54:55.308086   65393 main.go:141] libmachine: (old-k8s-version-343162) Creating domain...
	I0404 22:54:56.653772   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting to get IP...
	I0404 22:54:56.654708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.655215   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.655284   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.655205   66200 retry.go:31] will retry after 274.074964ms: waiting for machine to come up
	I0404 22:54:56.930932   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.931386   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.931451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.931345   66200 retry.go:31] will retry after 260.84968ms: waiting for machine to come up
	I0404 22:54:57.193886   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.194346   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.194368   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.194299   66200 retry.go:31] will retry after 334.40821ms: waiting for machine to come up
	I0404 22:54:57.530137   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.530694   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.530742   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.530631   66200 retry.go:31] will retry after 514.498594ms: waiting for machine to come up
	I0404 22:54:58.046394   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.046985   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.047025   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.046905   66200 retry.go:31] will retry after 466.899368ms: waiting for machine to come up
	I0404 22:54:58.515516   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.516090   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.516224   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.516149   66200 retry.go:31] will retry after 670.74835ms: waiting for machine to come up
	I0404 22:54:59.187992   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:59.188549   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:59.188579   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:59.188496   66200 retry.go:31] will retry after 849.489739ms: waiting for machine to come up
	I0404 22:55:00.039689   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:00.040255   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:00.040290   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:00.040190   66200 retry.go:31] will retry after 1.450679545s: waiting for machine to come up
	I0404 22:55:01.492870   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:01.493414   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:01.493440   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:01.493366   66200 retry.go:31] will retry after 1.844779665s: waiting for machine to come up
	I0404 22:55:03.340065   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:03.340708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:03.340739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:03.340650   66200 retry.go:31] will retry after 1.954275124s: waiting for machine to come up
	I0404 22:55:05.297408   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:05.297938   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:05.297986   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:05.297876   66200 retry.go:31] will retry after 1.771664796s: waiting for machine to come up
	I0404 22:55:07.072194   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:07.072586   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:07.072614   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:07.072532   66200 retry.go:31] will retry after 2.503470191s: waiting for machine to come up
	I0404 22:55:09.577312   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:09.577837   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:09.577877   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:09.577785   66200 retry.go:31] will retry after 3.900843515s: waiting for machine to come up
	I0404 22:55:13.482797   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483264   65393 main.go:141] libmachine: (old-k8s-version-343162) Found IP for machine: 192.168.39.247
	I0404 22:55:13.483286   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has current primary IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483293   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserving static IP address...
	I0404 22:55:13.483713   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.483753   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserved static IP address: 192.168.39.247
	I0404 22:55:13.483776   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | skip adding static IP to network mk-old-k8s-version-343162 - found existing host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"}
	I0404 22:55:13.483790   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting for SSH to be available...
	I0404 22:55:13.483810   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Getting to WaitForSSH function...
	I0404 22:55:13.485889   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486260   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.486303   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486379   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH client type: external
	I0404 22:55:13.486411   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa (-rw-------)
	I0404 22:55:13.486451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:13.486465   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | About to run SSH command:
	I0404 22:55:13.486493   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | exit 0
	I0404 22:55:13.616680   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:13.617056   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetConfigRaw
	I0404 22:55:13.617819   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:13.620441   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.620959   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.620987   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.621259   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:55:13.621490   65393 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:13.621512   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:13.621715   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.624353   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.624739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.624769   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.625019   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.625218   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625392   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625540   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.625730   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.625987   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.626005   65393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:13.745162   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:13.745198   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745533   65393 buildroot.go:166] provisioning hostname "old-k8s-version-343162"
	I0404 22:55:13.745558   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745773   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.748881   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749270   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.749304   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749393   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.749619   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749787   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.750304   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.750599   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.750624   65393 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-343162 && echo "old-k8s-version-343162" | sudo tee /etc/hostname
	I0404 22:55:13.887895   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-343162
	
	I0404 22:55:13.887942   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.891149   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891606   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.891657   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891985   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.892226   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892464   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892629   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.892839   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.893110   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.893139   65393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-343162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-343162/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-343162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:14.026612   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:55:14.026651   65393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:14.026677   65393 buildroot.go:174] setting up certificates
	I0404 22:55:14.026691   65393 provision.go:84] configureAuth start
	I0404 22:55:14.026702   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:14.026996   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:14.030366   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.030831   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.030869   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.031171   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.033684   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034074   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.034115   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034266   65393 provision.go:143] copyHostCerts
	I0404 22:55:14.034319   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:14.034331   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:14.034384   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:14.034471   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:14.034479   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:14.034498   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:14.034559   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:14.034567   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:14.034585   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:14.034633   65393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-343162 san=[127.0.0.1 192.168.39.247 localhost minikube old-k8s-version-343162]
	I0404 22:55:14.305753   65393 provision.go:177] copyRemoteCerts
	I0404 22:55:14.305805   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:14.305830   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.308454   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308772   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.308812   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308940   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.309140   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.309315   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.309461   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.399449   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:14.431583   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0404 22:55:14.459757   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:55:14.489959   65393 provision.go:87] duration metric: took 463.256669ms to configureAuth
	I0404 22:55:14.489999   65393 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:14.490222   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:55:14.490314   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.493316   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493721   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.493746   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493935   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.494154   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494339   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494495   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.494669   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.494915   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.494940   65393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:14.813278   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:14.813306   65393 machine.go:97] duration metric: took 1.19180289s to provisionDockerMachine
	I0404 22:55:14.813318   65393 start.go:293] postStartSetup for "old-k8s-version-343162" (driver="kvm2")
	I0404 22:55:14.813328   65393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:14.813365   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:14.813712   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:14.813739   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.817108   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817543   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.817577   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817770   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.817970   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.818192   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.818371   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.908922   65393 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:14.914161   65393 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:14.914190   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:14.914279   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:14.914388   65393 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:14.914522   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:14.924829   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:14.952058   65393 start.go:296] duration metric: took 138.725667ms for postStartSetup
	I0404 22:55:14.952103   65393 fix.go:56] duration metric: took 19.674601379s for fixHost
	I0404 22:55:14.952151   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.954833   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955233   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.955273   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955501   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.955712   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.955866   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.956022   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.956189   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.956355   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.956367   65393 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0404 22:55:15.073532   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271315.019545661
	
	I0404 22:55:15.073560   65393 fix.go:216] guest clock: 1712271315.019545661
	I0404 22:55:15.073569   65393 fix.go:229] Guest: 2024-04-04 22:55:15.019545661 +0000 UTC Remote: 2024-04-04 22:55:14.952108062 +0000 UTC m=+258.528860140 (delta=67.437599ms)
	I0404 22:55:15.073595   65393 fix.go:200] guest clock delta is within tolerance: 67.437599ms
	I0404 22:55:15.073603   65393 start.go:83] releasing machines lock for "old-k8s-version-343162", held for 19.79616835s
	I0404 22:55:15.073645   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.073982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:15.077001   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077416   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.077464   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077684   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078280   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078473   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078552   65393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:15.078592   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.078687   65393 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:15.078714   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.081320   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081704   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.081724   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081752   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081995   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082190   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082212   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.082249   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.082366   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082519   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082557   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.082639   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082782   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082912   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.165958   65393 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:15.206476   65393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:15.356625   65393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:15.364893   65393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:15.364973   65393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:15.387634   65393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:15.387663   65393 start.go:494] detecting cgroup driver to use...
	I0404 22:55:15.387728   65393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:15.408455   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:15.427753   65393 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:15.427812   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:15.443713   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:15.462657   65393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:15.611611   65393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:15.799003   65393 docker.go:233] disabling docker service ...
	I0404 22:55:15.799058   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:15.819716   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:15.838428   65393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:15.998060   65393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:16.143821   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:16.162284   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:16.185030   65393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0404 22:55:16.185124   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.201354   65393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:16.201422   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.214191   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.232995   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.246872   65393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:16.260730   65393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:16.272532   65393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:16.272601   65393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:16.289516   65393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:16.306546   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:16.469991   65393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:55:16.618533   65393 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:16.618609   65393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:16.623677   65393 start.go:562] Will wait 60s for crictl version
	I0404 22:55:16.623740   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:16.627698   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:16.667097   65393 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:16.667195   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.700780   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.739287   65393 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0404 22:55:16.740949   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:16.744057   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744491   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:16.744533   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744749   65393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:16.750820   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:16.769289   65393 kubeadm.go:877] updating cluster {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:16.769467   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:55:16.769531   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:16.827000   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:16.827077   65393 ssh_runner.go:195] Run: which lz4
	I0404 22:55:16.833494   65393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0404 22:55:16.839898   65393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:16.839972   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0404 22:55:18.896288   65393 crio.go:462] duration metric: took 2.062838778s to copy over tarball
	I0404 22:55:18.896369   65393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:22.459272   65393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562873515s)
	I0404 22:55:22.459298   65393 crio.go:469] duration metric: took 3.56298059s to extract the tarball
	I0404 22:55:22.459305   65393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:22.506283   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:22.545033   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:22.545060   65393 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:55:22.545126   65393 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.545137   65393 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.545173   65393 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.545196   65393 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.545238   65393 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.545302   65393 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.545208   65393 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0404 22:55:22.545446   65393 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.546976   65393 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0404 22:55:22.547023   65393 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.547031   65393 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.546980   65393 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.547034   65393 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.547073   65393 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.547003   65393 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.547012   65393 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.771721   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0404 22:55:22.774797   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.775291   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.776362   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.785066   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.785465   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.787095   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.935936   65393 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0404 22:55:22.935986   65393 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0404 22:55:22.936032   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.978411   65393 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0404 22:55:22.978460   65393 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.978521   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.988007   65393 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0404 22:55:22.988053   65393 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.988095   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001263   65393 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0404 22:55:23.001296   65393 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0404 22:55:23.001325   65393 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.001356   65393 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.001373   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001400   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007053   65393 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0404 22:55:23.007107   65393 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.007111   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0404 22:55:23.007115   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:23.007135   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007071   65393 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0404 22:55:23.007139   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:23.007170   65393 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.007207   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.009469   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.010460   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.144982   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.145036   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0404 22:55:23.149847   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0404 22:55:23.149886   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0404 22:55:23.149931   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0404 22:55:23.386832   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.542789   65393 cache_images.go:92] duration metric: took 997.708569ms to LoadCachedImages
	W0404 22:55:23.542901   65393 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0404 22:55:23.542928   65393 kubeadm.go:928] updating node { 192.168.39.247 8443 v1.20.0 crio true true} ...
	I0404 22:55:23.543082   65393 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-343162 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:23.543199   65393 ssh_runner.go:195] Run: crio config
	I0404 22:55:23.604999   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:55:23.605023   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:23.605035   65393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:23.605057   65393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-343162 NodeName:old-k8s-version-343162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0404 22:55:23.605247   65393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-343162"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
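	The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a minimal sanity check, a stream like this can be decoded document by document; the sketch below does that locally (the file name "kubeadm.yaml" and the use of gopkg.in/yaml.v2 are assumptions for local testing, not part of the test harness).

```go
// Hypothetical sketch: decode each document in a multi-document kubeadm.yaml
// and print its apiVersion/kind, to confirm the stream is well-formed.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of the generated file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
```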
	
	I0404 22:55:23.605324   65393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0404 22:55:23.618943   65393 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:23.619021   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:23.631449   65393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0404 22:55:23.653570   65393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:23.674444   65393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0404 22:55:23.696627   65393 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:23.701248   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:23.715979   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:23.872589   65393 ssh_runner.go:195] Run: sudo systemctl start kubelet
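	The /etc/hosts command a few lines up drops any existing "control-plane.minikube.internal" entry and appends the current mapping before reloading systemd. A minimal local Go sketch of the same idea follows (it writes to ./hosts.new instead of replacing /etc/hosts, since that would need root; paths are assumptions).

```go
// Hypothetical sketch of the /etc/hosts rewrite performed over SSH above:
// remove any stale control-plane.minikube.internal line, append the new one.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.247\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	// Write to a scratch file; installing it as /etc/hosts would require root.
	if err := os.WriteFile("hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
```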
	I0404 22:55:23.893271   65393 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162 for IP: 192.168.39.247
	I0404 22:55:23.893302   65393 certs.go:194] generating shared ca certs ...
	I0404 22:55:23.893323   65393 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:23.893508   65393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:23.893573   65393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:23.893591   65393 certs.go:256] generating profile certs ...
	I0404 22:55:23.893703   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.key
	I0404 22:55:23.893789   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key.184368d7
	I0404 22:55:23.893848   65393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key
	I0404 22:55:23.894013   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:23.894060   65393 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:23.894075   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:23.894119   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:23.894152   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:23.894184   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:23.894283   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:23.895146   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:23.930844   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:23.970129   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:24.012084   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:24.061026   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0404 22:55:24.099215   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 22:55:24.144924   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:24.179027   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:24.214467   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:24.261693   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:24.294049   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:24.330228   65393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:24.357217   65393 ssh_runner.go:195] Run: openssl version
	I0404 22:55:24.364368   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:24.377630   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384429   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384493   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.392806   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:24.409360   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:24.425575   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431294   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431359   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.438465   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:24.452037   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:24.467913   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473901   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473964   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.482161   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
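	The openssl/ln pairs above install each CA into the system trust directory: "openssl x509 -hash" prints the subject hash (e.g. b5213941) and the PEM is then symlinked as <hash>.0 under /etc/ssl/certs. The sketch below reproduces that one step locally; the file name, the "certs" scratch directory, and the presence of openssl on PATH are assumptions.

```go
// Hypothetical sketch of the per-certificate install step: ask openssl for the
// subject hash of a PEM and create the <hash>.0 symlink in a scratch directory.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "minikubeCA.pem" // assumed local copy of the CA certificate

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	linkDir := "certs" // stand-in for /etc/ssl/certs
	if err := os.MkdirAll(linkDir, 0755); err != nil {
		log.Fatal(err)
	}
	abs, _ := filepath.Abs(pem)
	if err := os.Symlink(abs, filepath.Join(linkDir, hash+".0")); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
}
```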
	I0404 22:55:24.498553   65393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:24.505506   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:24.513143   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:24.520059   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:24.527197   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:24.534384   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:24.541499   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
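	The "-checkend 86400" calls above ask openssl whether each certificate will still be valid 24 hours from now. The same check can be expressed with Go's standard library; the sketch below is only an illustration (the file name "apiserver.crt" is an assumption for local testing).

```go
// Hypothetical Go equivalent of "openssl x509 -checkend 86400": parse a PEM
// certificate and report whether it expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // assumed local copy of the cert
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid beyond 24h:", cert.NotAfter)
	}
}
```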
	I0404 22:55:24.548056   65393 kubeadm.go:391] StartCluster: {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:24.548157   65393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:24.548208   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.592650   65393 cri.go:89] found id: ""
	I0404 22:55:24.592732   65393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:24.605071   65393 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:24.605101   65393 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:24.605107   65393 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:24.605167   65393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:24.616615   65393 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:24.617680   65393 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-343162" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:24.618355   65393 kubeconfig.go:62] /home/jenkins/minikube-integration/16143-5297/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-343162" cluster setting kubeconfig missing "old-k8s-version-343162" context setting]
	I0404 22:55:24.619375   65393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:24.621522   65393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:24.632596   65393 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.247
	I0404 22:55:24.632645   65393 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:24.632660   65393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:24.632717   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.683167   65393 cri.go:89] found id: ""
	I0404 22:55:24.683241   65393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:24.703840   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:24.717504   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:24.717524   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:24.717579   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:24.729845   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:24.729916   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:24.741544   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:24.755004   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:24.755081   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:24.769007   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.782305   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:24.782371   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.792627   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:24.802737   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:24.802806   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:24.814766   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:24.825422   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.006245   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.377950   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.371665115s)
	I0404 22:55:26.377988   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.696179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.809448   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
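	The five commands above re-run the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the copied config during the restart path. A minimal sketch of that sequence is shown below; it assumes a plain "kubeadm" on PATH rather than the versioned binary under /var/lib/minikube/binaries, and running it for real would need root on a node.

```go
// Hypothetical sketch: run the same kubeadm init phases in order and stop at
// the first failure, mirroring the restartPrimaryControlPlane sequence above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"init", "phase", "kubeconfig", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"init", "phase", "kubelet-start", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"init", "phase", "control-plane", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"init", "phase", "etcd", "local", "--config", "/var/tmp/minikube/kubeadm.yaml"},
	}
	for _, args := range phases {
		fmt.Println("running: kubeadm", args)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			log.Fatalf("phase failed: %v\n%s", err, out)
		}
	}
}
```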
	I0404 22:55:26.955521   65393 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:26.955614   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.456445   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.956055   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.455728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.956667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.455874   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.955832   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.455677   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.956327   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:31.456660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:31.956195   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.456027   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.955974   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.456657   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.956113   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.456687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.955878   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.456630   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.956721   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:36.456247   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:36.956068   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.456343   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.955665   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.456277   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.956681   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.455983   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.956000   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.456576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.956669   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.455855   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.956291   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.455751   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.955701   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.456455   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.956511   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.456335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.956604   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.456239   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.955763   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:46.456691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:46.956402   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.456719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.956495   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.456256   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.955795   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.455864   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.956545   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.456336   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.956454   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:51.455845   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:51.955847   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.456474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.956717   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.456485   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.956668   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.455709   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.956540   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.455959   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.955819   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:56.456025   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:56.955746   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.456752   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.956458   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.455791   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.956520   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.456037   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.956424   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.456347   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.955807   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.456367   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.956676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.455725   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.956240   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.456026   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.955796   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.455684   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.955830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.455658   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.956488   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:06.456780   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:06.956270   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.456242   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.956623   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.455980   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.955699   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.456558   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.955713   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.455854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.955758   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:11.456543   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:11.956223   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.456571   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.956246   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.456188   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.956319   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.456519   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.955719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.455682   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.956653   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:16.456447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:16.955750   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.456215   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.956505   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.456183   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.955744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.456227   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.955977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.455808   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.956276   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.456049   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.956160   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.456145   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.956335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.456579   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.956691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.456562   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.956005   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.456432   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.956467   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:26.456167   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
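	The block above is the apiserver wait loop: pgrep is re-run roughly every 500ms looking for a kube-apiserver process, and once the wait window passes without a match the harness falls back to gathering diagnostics (the "listing CRI containers" and "Gathering logs" entries that follow). A minimal local sketch of that poll pattern is below; the 90-second window is an assumption, not the harness's actual timeout, and it runs locally rather than through ssh_runner.

```go
// Hypothetical sketch of the poll loop above: re-run pgrep every 500ms until
// the kube-apiserver process appears or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(90 * time.Second)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when no process matches the pattern.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver; fall back to log gathering")
}
```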
	I0404 22:56:26.956592   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:26.956691   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:27.005811   65393 cri.go:89] found id: ""
	I0404 22:56:27.005835   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.005842   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:27.005847   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:27.005917   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:27.042188   65393 cri.go:89] found id: ""
	I0404 22:56:27.042211   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.042235   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:27.042241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:27.042286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:27.079865   65393 cri.go:89] found id: ""
	I0404 22:56:27.079892   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.079900   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:27.079906   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:27.079958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:27.116998   65393 cri.go:89] found id: ""
	I0404 22:56:27.117024   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.117031   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:27.117037   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:27.117086   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:27.156188   65393 cri.go:89] found id: ""
	I0404 22:56:27.156214   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.156221   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:27.156236   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:27.156314   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:27.191588   65393 cri.go:89] found id: ""
	I0404 22:56:27.191620   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.191633   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:27.191640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:27.191687   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:27.231075   65393 cri.go:89] found id: ""
	I0404 22:56:27.231107   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.231117   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:27.231124   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:27.231190   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:27.270480   65393 cri.go:89] found id: ""
	I0404 22:56:27.270513   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.270525   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:27.270537   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:27.270561   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:27.324332   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:27.324373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:27.339847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:27.339874   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:27.468846   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:27.468868   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:27.468885   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:27.533979   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:27.534016   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
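	The repeated "connection to the server localhost:8443 was refused" failures in the describe-nodes step simply mean nothing is listening on the apiserver port yet, which is consistent with crictl finding no kube-apiserver container. A minimal sketch of that reachability check, assuming direct access to the node rather than ssh_runner:

```go
// Hypothetical sketch: probe the local apiserver port to distinguish "nothing
// listening" (connection refused) from other failure modes.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}
```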
	I0404 22:56:30.080447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:30.094371   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:30.094442   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:30.132187   65393 cri.go:89] found id: ""
	I0404 22:56:30.132211   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.132219   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:30.132225   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:30.132271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:30.173402   65393 cri.go:89] found id: ""
	I0404 22:56:30.173427   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.173437   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:30.173445   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:30.173509   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:30.212702   65393 cri.go:89] found id: ""
	I0404 22:56:30.212759   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.212773   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:30.212784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:30.212857   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:30.267124   65393 cri.go:89] found id: ""
	I0404 22:56:30.267153   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.267164   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:30.267171   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:30.267240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:30.314850   65393 cri.go:89] found id: ""
	I0404 22:56:30.314877   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.314887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:30.314895   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:30.314951   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:30.353961   65393 cri.go:89] found id: ""
	I0404 22:56:30.353985   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.353996   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:30.354003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:30.354065   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:30.393287   65393 cri.go:89] found id: ""
	I0404 22:56:30.393321   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.393333   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:30.393340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:30.393402   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:30.432268   65393 cri.go:89] found id: ""
	I0404 22:56:30.432304   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.432315   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:30.432333   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:30.432349   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:30.498906   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:30.498941   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.544676   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:30.544711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:30.595528   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:30.595562   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:30.610773   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:30.610811   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:30.688433   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:33.188634   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:33.203199   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:33.203262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:33.243222   65393 cri.go:89] found id: ""
	I0404 22:56:33.243250   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.243257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:33.243262   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:33.243330   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:33.284525   65393 cri.go:89] found id: ""
	I0404 22:56:33.284550   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.284560   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:33.284567   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:33.284621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:33.324219   65393 cri.go:89] found id: ""
	I0404 22:56:33.324249   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.324266   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:33.324273   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:33.324328   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:33.362229   65393 cri.go:89] found id: ""
	I0404 22:56:33.362254   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.362265   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:33.362272   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:33.362333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.404616   65393 cri.go:89] found id: ""
	I0404 22:56:33.404651   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.404663   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:33.404671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:33.404741   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:33.445123   65393 cri.go:89] found id: ""
	I0404 22:56:33.445150   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.445160   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:33.445168   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:33.445227   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:33.485996   65393 cri.go:89] found id: ""
	I0404 22:56:33.486025   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.486033   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:33.486041   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:33.486108   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:33.524275   65393 cri.go:89] found id: ""
	I0404 22:56:33.524299   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.524307   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:33.524315   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:33.524326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:33.577095   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:33.577133   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:33.592799   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:33.592830   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:33.672784   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:33.672804   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:33.672815   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:33.748049   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:33.748091   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:36.293335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:36.308303   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:36.308395   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:36.346627   65393 cri.go:89] found id: ""
	I0404 22:56:36.346648   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.346656   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:36.346661   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:36.346704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:36.386923   65393 cri.go:89] found id: ""
	I0404 22:56:36.386950   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.386958   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:36.386963   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:36.387011   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:36.427795   65393 cri.go:89] found id: ""
	I0404 22:56:36.427824   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.427832   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:36.427838   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:36.427898   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:36.464754   65393 cri.go:89] found id: ""
	I0404 22:56:36.464780   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.464790   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:36.464797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:36.464864   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:36.499502   65393 cri.go:89] found id: ""
	I0404 22:56:36.499529   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.499540   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:36.499549   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:36.499610   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:36.536686   65393 cri.go:89] found id: ""
	I0404 22:56:36.536716   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.536726   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:36.536734   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:36.536782   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:36.572573   65393 cri.go:89] found id: ""
	I0404 22:56:36.572602   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.572614   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:36.572623   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:36.572683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:36.607444   65393 cri.go:89] found id: ""
	I0404 22:56:36.607474   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.607483   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:36.607491   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:36.607502   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:36.662990   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:36.663040   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:36.677996   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:36.678019   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:36.751850   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:36.751871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:36.751886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:36.831451   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:36.831484   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:39.393170   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:39.407581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:39.407640   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:39.441850   65393 cri.go:89] found id: ""
	I0404 22:56:39.441879   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.441889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:39.441896   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:39.441981   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:39.476890   65393 cri.go:89] found id: ""
	I0404 22:56:39.476919   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.476931   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:39.476938   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:39.477001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:39.511497   65393 cri.go:89] found id: ""
	I0404 22:56:39.511523   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.511534   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:39.511540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:39.511605   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:39.549485   65393 cri.go:89] found id: ""
	I0404 22:56:39.549514   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.549526   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:39.549534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:39.549594   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:39.589203   65393 cri.go:89] found id: ""
	I0404 22:56:39.589234   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.589243   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:39.589249   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:39.589311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:39.626903   65393 cri.go:89] found id: ""
	I0404 22:56:39.626926   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.626939   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:39.626946   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:39.627008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:39.660975   65393 cri.go:89] found id: ""
	I0404 22:56:39.660999   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.661007   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:39.661016   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:39.661067   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:39.697162   65393 cri.go:89] found id: ""
	I0404 22:56:39.697187   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.697195   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:39.697203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:39.697217   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:39.755642   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:39.755683   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:39.773016   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:39.773052   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:39.849466   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:39.849491   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:39.849507   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:39.940680   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:39.940716   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:42.486573   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:42.500548   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:42.500612   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:42.539890   65393 cri.go:89] found id: ""
	I0404 22:56:42.539926   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.539954   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:42.539964   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:42.540030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:42.576664   65393 cri.go:89] found id: ""
	I0404 22:56:42.576699   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.576709   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:42.576715   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:42.576816   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:42.615456   65393 cri.go:89] found id: ""
	I0404 22:56:42.615489   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.615500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:42.615507   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:42.615569   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:42.667886   65393 cri.go:89] found id: ""
	I0404 22:56:42.667917   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.667928   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:42.667935   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:42.668001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:42.726120   65393 cri.go:89] found id: ""
	I0404 22:56:42.726150   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.726162   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:42.726169   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:42.726233   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:42.781280   65393 cri.go:89] found id: ""
	I0404 22:56:42.781305   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.781316   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:42.781322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:42.781386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:42.818419   65393 cri.go:89] found id: ""
	I0404 22:56:42.818449   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.818459   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:42.818466   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:42.818531   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:42.867866   65393 cri.go:89] found id: ""
	I0404 22:56:42.867902   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.867911   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:42.867920   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:42.867935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:42.953141   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:42.953186   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:42.994936   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:42.994968   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:43.047257   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:43.047288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:43.062391   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:43.062426   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:43.138948   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:45.639469   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:45.654584   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:45.654662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:45.693244   65393 cri.go:89] found id: ""
	I0404 22:56:45.693267   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.693276   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:45.693281   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:45.693335   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:45.732468   65393 cri.go:89] found id: ""
	I0404 22:56:45.732501   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.732513   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:45.732520   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:45.732587   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:45.769504   65393 cri.go:89] found id: ""
	I0404 22:56:45.769538   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.769552   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:45.769560   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:45.769625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:45.815840   65393 cri.go:89] found id: ""
	I0404 22:56:45.815864   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.815872   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:45.815877   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:45.815987   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:45.856479   65393 cri.go:89] found id: ""
	I0404 22:56:45.856511   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.856522   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:45.856530   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:45.856596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:45.896703   65393 cri.go:89] found id: ""
	I0404 22:56:45.896726   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.896734   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:45.896739   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:45.896797   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:45.938227   65393 cri.go:89] found id: ""
	I0404 22:56:45.938253   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.938261   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:45.938266   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:45.938349   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:45.975762   65393 cri.go:89] found id: ""
	I0404 22:56:45.975790   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.975801   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:45.975811   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:45.975823   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:46.056563   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:46.056599   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.099210   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:46.099242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:46.153524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:46.153560   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:46.169268   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:46.169297   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:46.244893   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:48.745334   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:48.760547   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:48.760625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:48.799676   65393 cri.go:89] found id: ""
	I0404 22:56:48.799699   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.799706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:48.799721   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:48.799780   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:48.843434   65393 cri.go:89] found id: ""
	I0404 22:56:48.843467   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.843476   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:48.843481   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:48.843544   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:48.895391   65393 cri.go:89] found id: ""
	I0404 22:56:48.895421   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.895440   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:48.895448   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:48.895513   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:48.936222   65393 cri.go:89] found id: ""
	I0404 22:56:48.936252   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.936263   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:48.936271   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:48.936334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:48.974533   65393 cri.go:89] found id: ""
	I0404 22:56:48.974563   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.974570   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:48.974575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:48.974629   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:49.015378   65393 cri.go:89] found id: ""
	I0404 22:56:49.015406   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.015424   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:49.015440   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:49.015501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:49.056623   65393 cri.go:89] found id: ""
	I0404 22:56:49.056647   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.056658   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:49.056664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:49.056725   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:49.102414   65393 cri.go:89] found id: ""
	I0404 22:56:49.102442   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.102453   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:49.102464   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:49.102476   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:49.158193   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:49.158240   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:49.173121   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:49.173147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:49.248973   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:49.249000   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:49.249017   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:49.341732   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:49.341778   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:51.888403   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:51.903442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:51.903503   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:51.941672   65393 cri.go:89] found id: ""
	I0404 22:56:51.941698   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.941706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:51.941712   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:51.941766   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:51.981373   65393 cri.go:89] found id: ""
	I0404 22:56:51.981396   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.981408   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:51.981413   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:51.981460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:52.019539   65393 cri.go:89] found id: ""
	I0404 22:56:52.019567   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.019575   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:52.019581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:52.019645   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:52.059015   65393 cri.go:89] found id: ""
	I0404 22:56:52.059050   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.059060   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:52.059073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:52.059134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:52.099879   65393 cri.go:89] found id: ""
	I0404 22:56:52.099904   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.099911   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:52.099917   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:52.099975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:52.140665   65393 cri.go:89] found id: ""
	I0404 22:56:52.140739   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.140752   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:52.140761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:52.140833   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:52.182063   65393 cri.go:89] found id: ""
	I0404 22:56:52.182091   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.182100   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:52.182106   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:52.182161   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:52.218610   65393 cri.go:89] found id: ""
	I0404 22:56:52.218636   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.218644   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:52.218651   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:52.218666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:52.232987   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:52.233014   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:52.307317   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:52.307343   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:52.307358   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:52.390484   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:52.390522   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:52.437568   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:52.437601   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:54.995830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:55.010758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:55.010818   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:55.057312   65393 cri.go:89] found id: ""
	I0404 22:56:55.057342   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.057354   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:55.057362   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:55.057440   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:55.097774   65393 cri.go:89] found id: ""
	I0404 22:56:55.097803   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.097815   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:55.097822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:55.097881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:55.134169   65393 cri.go:89] found id: ""
	I0404 22:56:55.134198   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.134207   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:55.134215   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:55.134268   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:55.174151   65393 cri.go:89] found id: ""
	I0404 22:56:55.174191   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.174203   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:55.174210   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:55.174297   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:55.213618   65393 cri.go:89] found id: ""
	I0404 22:56:55.213646   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.213655   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:55.213661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:55.213722   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:55.252406   65393 cri.go:89] found id: ""
	I0404 22:56:55.252435   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.252446   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:55.252455   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:55.252536   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:55.289595   65393 cri.go:89] found id: ""
	I0404 22:56:55.289623   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.289633   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:55.289640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:55.289702   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:55.328579   65393 cri.go:89] found id: ""
	I0404 22:56:55.328611   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.328621   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:55.328632   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:55.328647   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:55.384434   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:55.384470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:55.399311   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:55.399336   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:55.478341   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:55.478370   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:55.478404   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:55.560209   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:55.560242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:58.104854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:58.119735   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:58.119813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:58.158145   65393 cri.go:89] found id: ""
	I0404 22:56:58.158169   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.158182   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:58.158190   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:58.158255   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:58.198676   65393 cri.go:89] found id: ""
	I0404 22:56:58.198703   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.198714   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:58.198721   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:58.198779   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:58.236393   65393 cri.go:89] found id: ""
	I0404 22:56:58.236419   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.236430   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:58.236436   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:58.236505   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:58.274688   65393 cri.go:89] found id: ""
	I0404 22:56:58.274714   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.274724   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:58.274731   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:58.274796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:58.311908   65393 cri.go:89] found id: ""
	I0404 22:56:58.311935   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.311947   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:58.311956   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:58.312016   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:58.353477   65393 cri.go:89] found id: ""
	I0404 22:56:58.353500   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.353513   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:58.353518   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:58.353577   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.390981   65393 cri.go:89] found id: ""
	I0404 22:56:58.391004   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.391012   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:58.391017   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:58.391084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:58.433125   65393 cri.go:89] found id: ""
	I0404 22:56:58.433147   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.433160   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:58.433168   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:58.433181   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:58.488214   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:58.488251   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:58.503274   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:58.503315   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:58.583124   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:58.583149   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:58.583167   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:58.662856   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:58.662889   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:01.204061   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:01.219133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:01.219213   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:01.256615   65393 cri.go:89] found id: ""
	I0404 22:57:01.256641   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.256650   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:01.256655   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:01.256704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:01.294765   65393 cri.go:89] found id: ""
	I0404 22:57:01.294796   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.294805   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:01.294813   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:01.294874   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:01.334936   65393 cri.go:89] found id: ""
	I0404 22:57:01.334966   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.334976   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:01.334983   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:01.335030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:01.382389   65393 cri.go:89] found id: ""
	I0404 22:57:01.382429   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.382438   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:01.382443   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:01.382491   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:01.421962   65393 cri.go:89] found id: ""
	I0404 22:57:01.422001   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.422013   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:01.422020   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:01.422084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:01.460394   65393 cri.go:89] found id: ""
	I0404 22:57:01.460424   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.460432   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:01.460437   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:01.460484   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:01.497525   65393 cri.go:89] found id: ""
	I0404 22:57:01.497552   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.497561   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:01.497566   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:01.497623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:01.535396   65393 cri.go:89] found id: ""
	I0404 22:57:01.535431   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.535442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:01.535454   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:01.535469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:01.588397   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:01.588437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:01.603396   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:01.603427   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:01.686312   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:01.686332   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:01.686344   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:01.771352   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:01.771393   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.316206   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:04.330612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:04.330677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:04.377998   65393 cri.go:89] found id: ""
	I0404 22:57:04.378023   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.378031   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:04.378036   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:04.378098   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:04.421775   65393 cri.go:89] found id: ""
	I0404 22:57:04.421811   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.421822   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:04.421830   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:04.421894   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:04.463234   65393 cri.go:89] found id: ""
	I0404 22:57:04.463265   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.463276   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:04.463284   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:04.463347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:04.510879   65393 cri.go:89] found id: ""
	I0404 22:57:04.510907   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.510916   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:04.510925   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:04.511012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:04.552305   65393 cri.go:89] found id: ""
	I0404 22:57:04.552336   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.552348   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:04.552356   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:04.552413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:04.592631   65393 cri.go:89] found id: ""
	I0404 22:57:04.592661   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.592672   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:04.592679   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:04.592739   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:04.631854   65393 cri.go:89] found id: ""
	I0404 22:57:04.631883   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.631893   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:04.631900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:04.631966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:04.670530   65393 cri.go:89] found id: ""
	I0404 22:57:04.670554   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.670563   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:04.670570   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:04.670582   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:04.685546   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:04.685575   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:04.770377   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:04.770400   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:04.770420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:04.853637   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:04.853674   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.899690   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:04.899719   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:07.451203   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:07.465593   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:07.465665   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:07.505475   65393 cri.go:89] found id: ""
	I0404 22:57:07.505504   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.505516   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:07.505524   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:07.505584   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:07.543744   65393 cri.go:89] found id: ""
	I0404 22:57:07.543802   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.543814   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:07.543822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:07.543891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:07.586034   65393 cri.go:89] found id: ""
	I0404 22:57:07.586059   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.586067   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:07.586073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:07.586133   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:07.624880   65393 cri.go:89] found id: ""
	I0404 22:57:07.624908   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.624917   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:07.624932   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:07.624992   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:07.667653   65393 cri.go:89] found id: ""
	I0404 22:57:07.667684   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.667696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:07.667704   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:07.667798   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:07.704434   65393 cri.go:89] found id: ""
	I0404 22:57:07.704466   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.704478   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:07.704486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:07.704547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:07.741679   65393 cri.go:89] found id: ""
	I0404 22:57:07.741702   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.741710   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:07.741715   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:07.741760   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:07.782407   65393 cri.go:89] found id: ""
	I0404 22:57:07.782434   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.782442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:07.782450   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:07.782461   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:07.796141   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:07.796175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:07.880985   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:07.881004   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:07.881015   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:07.963986   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:07.964027   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:08.010874   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:08.010907   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.564802   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:10.581360   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:10.581430   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:10.619856   65393 cri.go:89] found id: ""
	I0404 22:57:10.619882   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.619889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:10.619895   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:10.619958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:10.662867   65393 cri.go:89] found id: ""
	I0404 22:57:10.662893   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.662900   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:10.662906   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:10.662966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:10.705241   65393 cri.go:89] found id: ""
	I0404 22:57:10.705277   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.705290   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:10.705298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:10.705363   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:10.743811   65393 cri.go:89] found id: ""
	I0404 22:57:10.743844   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.743855   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:10.743863   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:10.743918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:10.783648   65393 cri.go:89] found id: ""
	I0404 22:57:10.783672   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.783684   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:10.783690   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:10.783743   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:10.825606   65393 cri.go:89] found id: ""
	I0404 22:57:10.825639   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.825649   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:10.825657   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:10.825712   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:10.865138   65393 cri.go:89] found id: ""
	I0404 22:57:10.865168   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.865178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:10.865185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:10.865238   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:10.906855   65393 cri.go:89] found id: ""
	I0404 22:57:10.906881   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.906888   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:10.906895   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:10.906937   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.960341   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:10.960375   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:10.975199   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:10.975228   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:11.083944   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:11.083970   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:11.083985   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:11.166754   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:11.166794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:13.708326   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:13.724345   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:13.724413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:13.766699   65393 cri.go:89] found id: ""
	I0404 22:57:13.766732   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.766744   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:13.766751   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:13.766813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:13.804517   65393 cri.go:89] found id: ""
	I0404 22:57:13.804545   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.804556   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:13.804564   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:13.804623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:13.846168   65393 cri.go:89] found id: ""
	I0404 22:57:13.846202   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.846213   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:13.846219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:13.846278   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:13.894719   65393 cri.go:89] found id: ""
	I0404 22:57:13.894743   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.894753   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:13.894761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:13.894823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:13.929981   65393 cri.go:89] found id: ""
	I0404 22:57:13.930013   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.930024   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:13.930031   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:13.930102   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:13.968531   65393 cri.go:89] found id: ""
	I0404 22:57:13.968578   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.968590   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:13.968598   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:13.968662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:14.012460   65393 cri.go:89] found id: ""
	I0404 22:57:14.012492   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.012502   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:14.012509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:14.012571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:14.051175   65393 cri.go:89] found id: ""
	I0404 22:57:14.051207   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.051218   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:14.051228   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:14.051243   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:14.107968   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:14.108004   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:14.123756   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:14.123794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:14.209452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:14.209475   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:14.209493   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:14.287297   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:14.287334   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:16.832667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:16.850323   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:16.850396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:16.897389   65393 cri.go:89] found id: ""
	I0404 22:57:16.897415   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.897426   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:16.897433   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:16.897502   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:16.938893   65393 cri.go:89] found id: ""
	I0404 22:57:16.938927   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.938939   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:16.938945   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:16.939005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:16.978621   65393 cri.go:89] found id: ""
	I0404 22:57:16.978648   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.978655   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:16.978661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:16.978707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:17.036499   65393 cri.go:89] found id: ""
	I0404 22:57:17.036523   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.036532   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:17.036539   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:17.036596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:17.092932   65393 cri.go:89] found id: ""
	I0404 22:57:17.092958   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.092966   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:17.092972   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:17.093026   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:17.129925   65393 cri.go:89] found id: ""
	I0404 22:57:17.129948   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.129956   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:17.129963   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:17.130025   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:17.168192   65393 cri.go:89] found id: ""
	I0404 22:57:17.168218   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.168232   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:17.168238   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:17.168317   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:17.213090   65393 cri.go:89] found id: ""
	I0404 22:57:17.213115   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.213125   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:17.213136   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:17.213149   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:17.295489   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:17.295527   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:17.350780   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:17.350832   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:17.403580   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:17.403621   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:17.419093   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:17.419131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:17.503614   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.004676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:20.019340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:20.019421   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:20.059400   65393 cri.go:89] found id: ""
	I0404 22:57:20.059431   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.059442   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:20.059448   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:20.059510   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:20.104769   65393 cri.go:89] found id: ""
	I0404 22:57:20.104796   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.104808   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:20.104815   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:20.104875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:20.143209   65393 cri.go:89] found id: ""
	I0404 22:57:20.143233   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.143241   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:20.143248   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:20.143300   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:20.187941   65393 cri.go:89] found id: ""
	I0404 22:57:20.187976   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.187987   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:20.187995   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:20.188082   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:20.231158   65393 cri.go:89] found id: ""
	I0404 22:57:20.231192   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.231200   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:20.231206   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:20.231271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:20.281069   65393 cri.go:89] found id: ""
	I0404 22:57:20.281102   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.281113   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:20.281121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:20.281187   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:20.320403   65393 cri.go:89] found id: ""
	I0404 22:57:20.320436   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.320448   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:20.320456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:20.320529   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:20.371202   65393 cri.go:89] found id: ""
	I0404 22:57:20.371229   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.371237   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:20.371246   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:20.371256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:20.416698   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:20.416725   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:20.468328   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:20.468362   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:20.483250   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:20.483274   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:20.562841   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.562871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:20.562886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:23.141477   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:23.157859   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:23.157924   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:23.203492   65393 cri.go:89] found id: ""
	I0404 22:57:23.203526   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.203538   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:23.203545   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:23.203613   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:23.248190   65393 cri.go:89] found id: ""
	I0404 22:57:23.248220   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.248231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:23.248244   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:23.248310   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:23.284427   65393 cri.go:89] found id: ""
	I0404 22:57:23.284456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.284467   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:23.284475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:23.284562   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:23.322429   65393 cri.go:89] found id: ""
	I0404 22:57:23.322456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.322464   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:23.322469   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:23.322534   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:23.364030   65393 cri.go:89] found id: ""
	I0404 22:57:23.364069   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.364080   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:23.364087   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:23.364168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:23.408306   65393 cri.go:89] found id: ""
	I0404 22:57:23.408343   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.408356   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:23.408363   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:23.408423   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:23.452933   65393 cri.go:89] found id: ""
	I0404 22:57:23.452968   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.452976   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:23.452982   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:23.453036   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:23.494156   65393 cri.go:89] found id: ""
	I0404 22:57:23.494184   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.494193   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:23.494203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:23.494222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:23.548013   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:23.548053   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:23.564765   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:23.564797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:23.642661   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:23.642685   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:23.642700   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:23.737958   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:23.737996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:26.290576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:26.307580   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:26.307641   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:26.345971   65393 cri.go:89] found id: ""
	I0404 22:57:26.346000   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.346011   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:26.346018   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:26.346077   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:26.386979   65393 cri.go:89] found id: ""
	I0404 22:57:26.387009   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.387019   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:26.387026   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:26.387112   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:26.430627   65393 cri.go:89] found id: ""
	I0404 22:57:26.430649   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.430665   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:26.430671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:26.430724   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:26.469657   65393 cri.go:89] found id: ""
	I0404 22:57:26.469705   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.469716   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:26.469723   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:26.469794   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:26.513853   65393 cri.go:89] found id: ""
	I0404 22:57:26.513877   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.513887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:26.513894   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:26.513954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:26.557702   65393 cri.go:89] found id: ""
	I0404 22:57:26.557731   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.557742   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:26.557749   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:26.557807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:26.595248   65393 cri.go:89] found id: ""
	I0404 22:57:26.595279   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.595291   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:26.595298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:26.595364   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:26.634552   65393 cri.go:89] found id: ""
	I0404 22:57:26.634581   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.634591   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:26.634603   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:26.634618   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:26.686928   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:26.686963   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:26.701308   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:26.701341   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:26.785243   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:26.785267   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:26.785286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:26.867513   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:26.867555   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.416286   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:29.434234   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:29.434308   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:29.488598   65393 cri.go:89] found id: ""
	I0404 22:57:29.488628   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.488637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:29.488643   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:29.488727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:29.541107   65393 cri.go:89] found id: ""
	I0404 22:57:29.541130   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.541138   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:29.541144   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:29.541204   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:29.597144   65393 cri.go:89] found id: ""
	I0404 22:57:29.597174   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.597186   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:29.597192   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:29.597258   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:29.636833   65393 cri.go:89] found id: ""
	I0404 22:57:29.636865   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.636874   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:29.636881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:29.636961   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:29.672867   65393 cri.go:89] found id: ""
	I0404 22:57:29.672893   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.672903   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:29.672909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:29.672970   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:29.713135   65393 cri.go:89] found id: ""
	I0404 22:57:29.713168   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.713180   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:29.713188   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:29.713279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:29.750862   65393 cri.go:89] found id: ""
	I0404 22:57:29.750895   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.750906   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:29.750914   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:29.750965   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:29.790274   65393 cri.go:89] found id: ""
	I0404 22:57:29.790302   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.790311   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:29.790320   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:29.790332   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.831848   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:29.831884   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:29.889443   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:29.889478   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:29.905468   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:29.905500   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:29.983283   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:29.983313   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:29.983328   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:32.567351   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:32.581988   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:32.582052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:32.619164   65393 cri.go:89] found id: ""
	I0404 22:57:32.619191   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.619201   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:32.619208   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:32.619262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:32.656844   65393 cri.go:89] found id: ""
	I0404 22:57:32.656882   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.656894   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:32.656902   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:32.656966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:32.693024   65393 cri.go:89] found id: ""
	I0404 22:57:32.693052   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.693064   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:32.693071   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:32.693134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:32.732579   65393 cri.go:89] found id: ""
	I0404 22:57:32.732609   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.732618   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:32.732625   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:32.732685   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:32.777586   65393 cri.go:89] found id: ""
	I0404 22:57:32.777624   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.777634   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:32.777644   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:32.777713   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:32.817079   65393 cri.go:89] found id: ""
	I0404 22:57:32.817115   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.817127   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:32.817136   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:32.817199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:32.856945   65393 cri.go:89] found id: ""
	I0404 22:57:32.856978   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.856988   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:32.856994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:32.857047   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:32.895047   65393 cri.go:89] found id: ""
	I0404 22:57:32.895070   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.895082   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:32.895090   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:32.895103   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.949865   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:32.949904   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:32.964629   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:32.964656   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:33.039424   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:33.039446   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:33.039458   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:33.115819   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:33.115854   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:35.659512   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:35.674512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:35.674589   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:35.714058   65393 cri.go:89] found id: ""
	I0404 22:57:35.714093   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.714102   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:35.714109   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:35.714173   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:35.753765   65393 cri.go:89] found id: ""
	I0404 22:57:35.753800   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.753811   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:35.753819   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:35.753883   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:35.792177   65393 cri.go:89] found id: ""
	I0404 22:57:35.792204   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.792216   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:35.792223   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:35.792288   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:35.829891   65393 cri.go:89] found id: ""
	I0404 22:57:35.829923   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.829943   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:35.829952   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:35.830012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:35.872224   65393 cri.go:89] found id: ""
	I0404 22:57:35.872255   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.872267   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:35.872276   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:35.872341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:35.909984   65393 cri.go:89] found id: ""
	I0404 22:57:35.910010   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.910020   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:35.910027   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:35.910088   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:35.948755   65393 cri.go:89] found id: ""
	I0404 22:57:35.948784   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.948795   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:35.948805   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:35.948868   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:35.984167   65393 cri.go:89] found id: ""
	I0404 22:57:35.984195   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.984203   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:35.984212   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:35.984224   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:35.998714   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:35.998740   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:36.070003   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:36.070028   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:36.070044   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:36.149404   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:36.149442   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:36.190749   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:36.190776   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:38.747728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:38.761768   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:38.761840   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:38.802423   65393 cri.go:89] found id: ""
	I0404 22:57:38.802456   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.802467   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:38.802474   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:38.802525   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:38.843476   65393 cri.go:89] found id: ""
	I0404 22:57:38.843500   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.843508   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:38.843513   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:38.843583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:38.883093   65393 cri.go:89] found id: ""
	I0404 22:57:38.883125   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.883136   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:38.883145   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:38.883203   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:38.919802   65393 cri.go:89] found id: ""
	I0404 22:57:38.919831   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.919840   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:38.919847   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:38.919914   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:38.955242   65393 cri.go:89] found id: ""
	I0404 22:57:38.955280   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.955294   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:38.955302   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:38.955366   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:38.993377   65393 cri.go:89] found id: ""
	I0404 22:57:38.993409   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.993420   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:38.993428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:38.993486   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:39.033379   65393 cri.go:89] found id: ""
	I0404 22:57:39.033406   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.033417   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:39.033424   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:39.033499   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:39.076827   65393 cri.go:89] found id: ""
	I0404 22:57:39.076853   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.076862   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:39.076870   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:39.076880   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:39.134007   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:39.134045   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:39.149271   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:39.149298   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:39.222569   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:39.222593   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:39.222608   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:39.309583   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:39.309613   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:41.850659   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:41.866430   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:41.866493   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:41.906390   65393 cri.go:89] found id: ""
	I0404 22:57:41.906418   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.906437   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:41.906445   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:41.906506   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:41.942002   65393 cri.go:89] found id: ""
	I0404 22:57:41.942029   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.942039   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:41.942047   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:41.942105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:41.984054   65393 cri.go:89] found id: ""
	I0404 22:57:41.984078   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.984086   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:41.984092   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:41.984167   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:42.022757   65393 cri.go:89] found id: ""
	I0404 22:57:42.022779   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.022787   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:42.022792   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:42.022869   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:42.063261   65393 cri.go:89] found id: ""
	I0404 22:57:42.063284   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.063292   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:42.063297   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:42.063347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:42.102752   65393 cri.go:89] found id: ""
	I0404 22:57:42.102783   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.102800   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:42.102809   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:42.102872   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:42.139533   65393 cri.go:89] found id: ""
	I0404 22:57:42.139561   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.139570   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:42.139576   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:42.139627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:42.180944   65393 cri.go:89] found id: ""
	I0404 22:57:42.180976   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.180986   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:42.180996   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:42.181008   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:42.231499   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:42.231533   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:42.247888   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:42.247918   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:42.327828   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:42.327849   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:42.327860   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:42.413471   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:42.413509   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:44.958686   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:44.973081   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:44.973159   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:45.011068   65393 cri.go:89] found id: ""
	I0404 22:57:45.011095   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.011103   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:45.011108   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:45.011162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:45.052522   65393 cri.go:89] found id: ""
	I0404 22:57:45.052560   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.052578   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:45.052586   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:45.052649   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:45.091172   65393 cri.go:89] found id: ""
	I0404 22:57:45.091204   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.091215   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:45.091222   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:45.091289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:45.129853   65393 cri.go:89] found id: ""
	I0404 22:57:45.129883   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.129892   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:45.129900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:45.129960   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:45.167779   65393 cri.go:89] found id: ""
	I0404 22:57:45.167806   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.167815   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:45.167822   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:45.167881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:45.205111   65393 cri.go:89] found id: ""
	I0404 22:57:45.205139   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.205153   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:45.205158   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:45.205231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:45.240931   65393 cri.go:89] found id: ""
	I0404 22:57:45.240957   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.240965   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:45.240971   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:45.241033   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:45.277934   65393 cri.go:89] found id: ""
	I0404 22:57:45.277956   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.277964   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:45.277974   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:45.277989   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:45.332725   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:45.332755   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:45.349776   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:45.349806   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:45.428071   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:45.428098   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:45.428113   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:45.510148   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:45.510188   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:48.052655   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:48.067339   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:48.067416   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:48.101725   65393 cri.go:89] found id: ""
	I0404 22:57:48.101756   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.101765   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:48.101771   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:48.101823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:48.137109   65393 cri.go:89] found id: ""
	I0404 22:57:48.137136   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.137147   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:48.137153   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:48.137216   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:48.174703   65393 cri.go:89] found id: ""
	I0404 22:57:48.174735   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.174745   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:48.174751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:48.174800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:48.213531   65393 cri.go:89] found id: ""
	I0404 22:57:48.213554   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.213565   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:48.213572   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:48.213621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:48.252457   65393 cri.go:89] found id: ""
	I0404 22:57:48.252483   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.252493   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:48.252500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:48.252566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:48.297769   65393 cri.go:89] found id: ""
	I0404 22:57:48.297797   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.297807   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:48.297815   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:48.297871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:48.336275   65393 cri.go:89] found id: ""
	I0404 22:57:48.336297   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.336312   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:48.336319   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:48.336373   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:48.375192   65393 cri.go:89] found id: ""
	I0404 22:57:48.375220   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.375230   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:48.375242   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:48.375256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:48.428276   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:48.428309   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:48.442545   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:48.442572   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:48.521287   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:48.521314   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:48.521326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:48.605921   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:48.605965   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:51.148671   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:51.163922   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:51.163993   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:51.205098   65393 cri.go:89] found id: ""
	I0404 22:57:51.205127   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.205137   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:51.205145   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:51.205210   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:51.241238   65393 cri.go:89] found id: ""
	I0404 22:57:51.241265   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.241276   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:51.241287   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:51.241352   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:51.279667   65393 cri.go:89] found id: ""
	I0404 22:57:51.279691   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.279700   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:51.279710   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:51.279767   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:51.325092   65393 cri.go:89] found id: ""
	I0404 22:57:51.325114   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.325121   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:51.325127   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:51.325175   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:51.367212   65393 cri.go:89] found id: ""
	I0404 22:57:51.367233   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.367244   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:51.367252   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:51.367334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:51.407716   65393 cri.go:89] found id: ""
	I0404 22:57:51.407739   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.407747   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:51.407753   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:51.407800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:51.448021   65393 cri.go:89] found id: ""
	I0404 22:57:51.448050   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.448060   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:51.448066   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:51.448138   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:51.484815   65393 cri.go:89] found id: ""
	I0404 22:57:51.484841   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.484849   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:51.484857   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:51.484868   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:51.538503   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:51.538530   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:51.552658   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:51.552686   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:51.628261   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:51.628288   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:51.628313   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:51.713890   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:51.713935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:54.257046   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:54.270797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:54.270853   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:54.308550   65393 cri.go:89] found id: ""
	I0404 22:57:54.308579   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.308589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:54.308596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:54.308666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:54.347736   65393 cri.go:89] found id: ""
	I0404 22:57:54.347764   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.347773   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:54.347779   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:54.347825   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:54.389179   65393 cri.go:89] found id: ""
	I0404 22:57:54.389209   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.389220   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:54.389227   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:54.389286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:54.429961   65393 cri.go:89] found id: ""
	I0404 22:57:54.429984   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.429993   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:54.429998   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:54.430053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:54.469383   65393 cri.go:89] found id: ""
	I0404 22:57:54.469406   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.469414   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:54.469419   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:54.469481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:54.506260   65393 cri.go:89] found id: ""
	I0404 22:57:54.506291   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.506302   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:54.506309   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:54.506374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:54.545050   65393 cri.go:89] found id: ""
	I0404 22:57:54.545080   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.545090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:54.545096   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:54.545144   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:54.584895   65393 cri.go:89] found id: ""
	I0404 22:57:54.584932   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.584943   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:54.584955   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:54.584969   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:54.637294   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:54.637329   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:54.652792   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:54.652821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:54.729248   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:54.729268   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:54.729286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:54.810107   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:54.810135   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:57.356688   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:57.371841   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:57.371901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:57.408063   65393 cri.go:89] found id: ""
	I0404 22:57:57.408087   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.408095   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:57.408112   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:57.408199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:57.444197   65393 cri.go:89] found id: ""
	I0404 22:57:57.444222   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.444231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:57.444241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:57.444291   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:57.479502   65393 cri.go:89] found id: ""
	I0404 22:57:57.479528   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.479536   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:57.479542   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:57.479593   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:57.515017   65393 cri.go:89] found id: ""
	I0404 22:57:57.515044   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.515057   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:57.515064   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:57.515113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:57.550568   65393 cri.go:89] found id: ""
	I0404 22:57:57.550595   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.550603   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:57.550609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:57.550670   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:57.587662   65393 cri.go:89] found id: ""
	I0404 22:57:57.587689   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.587697   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:57.587703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:57.587761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:57.624184   65393 cri.go:89] found id: ""
	I0404 22:57:57.624206   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.624213   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:57.624219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:57.624274   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:57.662289   65393 cri.go:89] found id: ""
	I0404 22:57:57.662312   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.662320   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:57.662329   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:57.662339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:57.677703   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:57.677728   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:57.768164   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:57.768191   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:57.768206   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.853138   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:57.853175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:57.896254   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:57.896291   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.448289   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:00.462243   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:00.462306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:00.500146   65393 cri.go:89] found id: ""
	I0404 22:58:00.500180   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.500191   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:00.500199   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:00.500260   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:00.539443   65393 cri.go:89] found id: ""
	I0404 22:58:00.539469   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.539477   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:00.539482   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:00.539532   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:00.582120   65393 cri.go:89] found id: ""
	I0404 22:58:00.582149   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.582160   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:00.582167   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:00.582231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:00.622269   65393 cri.go:89] found id: ""
	I0404 22:58:00.622291   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.622299   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:00.622305   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:00.622361   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:00.659947   65393 cri.go:89] found id: ""
	I0404 22:58:00.659980   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.659992   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:00.659999   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:00.660053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:00.695604   65393 cri.go:89] found id: ""
	I0404 22:58:00.695632   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.695642   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:00.695650   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:00.695707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:00.737962   65393 cri.go:89] found id: ""
	I0404 22:58:00.738044   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.738065   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:00.738075   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:00.738168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:00.781043   65393 cri.go:89] found id: ""
	I0404 22:58:00.781069   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.781076   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:00.781085   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:00.781096   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:00.828143   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:00.828174   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.883483   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:00.883521   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:00.899435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:00.899463   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:00.975258   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:00.975277   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:00.975288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:03.553230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:03.567909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:03.568005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:03.608638   65393 cri.go:89] found id: ""
	I0404 22:58:03.608666   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.608675   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:03.608680   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:03.608727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:03.647698   65393 cri.go:89] found id: ""
	I0404 22:58:03.647726   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.647735   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:03.647741   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:03.647804   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:03.686143   65393 cri.go:89] found id: ""
	I0404 22:58:03.686167   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.686176   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:03.686181   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:03.686230   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:03.723462   65393 cri.go:89] found id: ""
	I0404 22:58:03.723487   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.723494   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:03.723500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:03.723556   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:03.762299   65393 cri.go:89] found id: ""
	I0404 22:58:03.762332   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.762342   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:03.762350   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:03.762426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:03.798165   65393 cri.go:89] found id: ""
	I0404 22:58:03.798198   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.798215   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:03.798225   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:03.798292   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:03.837607   65393 cri.go:89] found id: ""
	I0404 22:58:03.837636   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.837648   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:03.837655   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:03.837716   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:03.877346   65393 cri.go:89] found id: ""
	I0404 22:58:03.877380   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.877394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:03.877410   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:03.877432   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:03.934033   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:03.934073   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:03.949106   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:03.949131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:04.022674   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:04.022695   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:04.022707   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:04.101187   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:04.101225   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:06.653116   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:06.667790   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:06.667867   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:06.712208   65393 cri.go:89] found id: ""
	I0404 22:58:06.712230   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.712238   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:06.712243   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:06.712289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:06.748489   65393 cri.go:89] found id: ""
	I0404 22:58:06.748522   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.748533   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:06.748540   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:06.748602   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:06.786720   65393 cri.go:89] found id: ""
	I0404 22:58:06.786745   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.786753   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:06.786758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:06.786805   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:06.825404   65393 cri.go:89] found id: ""
	I0404 22:58:06.825437   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.825444   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:06.825461   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:06.825515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:06.864851   65393 cri.go:89] found id: ""
	I0404 22:58:06.864879   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.864890   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:06.864898   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:06.864959   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:06.906229   65393 cri.go:89] found id: ""
	I0404 22:58:06.906258   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.906268   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:06.906274   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:06.906327   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:06.946126   65393 cri.go:89] found id: ""
	I0404 22:58:06.946153   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.946164   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:06.946172   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:06.946234   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:06.983746   65393 cri.go:89] found id: ""
	I0404 22:58:06.983779   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.983792   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:06.983805   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:06.983821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:07.038290   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:07.038330   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:07.053847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:07.053875   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:07.127453   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:07.127479   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:07.127496   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:07.206638   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:07.206676   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:09.753016   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:09.768850   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:09.768918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:09.804549   65393 cri.go:89] found id: ""
	I0404 22:58:09.804579   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.804589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:09.804596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:09.804653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:09.847299   65393 cri.go:89] found id: ""
	I0404 22:58:09.847323   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.847334   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:09.847341   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:09.847399   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:09.902064   65393 cri.go:89] found id: ""
	I0404 22:58:09.902093   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.902104   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:09.902111   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:09.902171   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:09.953956   65393 cri.go:89] found id: ""
	I0404 22:58:09.953986   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.953997   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:09.954003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:09.954071   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:10.007853   65393 cri.go:89] found id: ""
	I0404 22:58:10.007884   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.007892   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:10.007897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:10.007954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:10.046924   65393 cri.go:89] found id: ""
	I0404 22:58:10.046960   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.046970   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:10.046977   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:10.047038   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:10.086851   65393 cri.go:89] found id: ""
	I0404 22:58:10.086878   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.086890   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:10.086896   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:10.086956   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:10.126678   65393 cri.go:89] found id: ""
	I0404 22:58:10.126710   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.126719   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:10.126727   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:10.126741   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:10.142641   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:10.142669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:10.226953   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:10.226978   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:10.226991   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:10.310046   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:10.310078   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:10.356140   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:10.356173   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:12.911501   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:12.924292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:12.924374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:12.959969   65393 cri.go:89] found id: ""
	I0404 22:58:12.959997   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.960007   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:12.960015   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:12.960064   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:12.998817   65393 cri.go:89] found id: ""
	I0404 22:58:12.998846   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.998856   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:12.998876   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:12.998943   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:13.035295   65393 cri.go:89] found id: ""
	I0404 22:58:13.035326   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.035337   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:13.035343   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:13.035403   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:13.076712   65393 cri.go:89] found id: ""
	I0404 22:58:13.076735   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.076744   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:13.076751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:13.076823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:13.112987   65393 cri.go:89] found id: ""
	I0404 22:58:13.113015   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.113023   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:13.113029   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:13.113092   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:13.155330   65393 cri.go:89] found id: ""
	I0404 22:58:13.155355   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.155366   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:13.155373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:13.155432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:13.194322   65393 cri.go:89] found id: ""
	I0404 22:58:13.194357   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.194368   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:13.194374   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:13.194432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:13.231939   65393 cri.go:89] found id: ""
	I0404 22:58:13.231969   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.231981   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:13.231993   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:13.232012   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:13.312244   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:13.312278   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:13.356605   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:13.356640   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:13.411760   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:13.411801   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:13.427373   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:13.427397   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:13.509840   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
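	
	The cycle above repeats while no kube-apiserver container is found and every call to localhost:8443 is refused, so minikube keeps falling back to gathering kubelet, dmesg, CRI-O and container-status logs. The probe can be reproduced by hand over SSH to the node; a minimal sketch using only the commands already shown in the log (the v1.20.0 kubectl path and kubeconfig location are the ones this run uses):
	
		# list any control-plane containers CRI-O knows about (empty output = none running)
		sudo crictl ps -a --quiet --name=kube-apiserver
		sudo crictl ps -a --quiet --name=etcd
		# node-level logs minikube collects while the apiserver is unreachable
		sudo journalctl -u kubelet -n 400
		sudo journalctl -u crio -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		# the call that keeps failing with "connection ... refused" while no apiserver is up
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	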
	I0404 22:58:16.010230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:16.023985   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:16.024050   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:16.063038   65393 cri.go:89] found id: ""
	I0404 22:58:16.063062   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.063070   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:16.063075   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:16.063128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:16.100140   65393 cri.go:89] found id: ""
	I0404 22:58:16.100170   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.100182   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:16.100189   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:16.100252   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:16.137319   65393 cri.go:89] found id: ""
	I0404 22:58:16.137354   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.137362   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:16.137373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:16.137425   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:16.173127   65393 cri.go:89] found id: ""
	I0404 22:58:16.173151   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.173159   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:16.173166   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:16.173212   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:16.213639   65393 cri.go:89] found id: ""
	I0404 22:58:16.213667   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.213676   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:16.213683   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:16.213744   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:16.255272   65393 cri.go:89] found id: ""
	I0404 22:58:16.255301   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.255312   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:16.255320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:16.255381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:16.289359   65393 cri.go:89] found id: ""
	I0404 22:58:16.289387   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.289397   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:16.289404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:16.289466   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:16.326711   65393 cri.go:89] found id: ""
	I0404 22:58:16.326738   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.326748   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:16.326791   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:16.326817   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:16.374754   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:16.374788   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:16.425956   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:16.425994   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:16.440815   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:16.440848   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:16.512725   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.512750   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:16.512770   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.099694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:19.113559   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:19.113617   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:19.148531   65393 cri.go:89] found id: ""
	I0404 22:58:19.148553   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.148561   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:19.148567   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:19.148627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:19.186728   65393 cri.go:89] found id: ""
	I0404 22:58:19.186750   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.186759   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:19.186764   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:19.186809   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:19.223221   65393 cri.go:89] found id: ""
	I0404 22:58:19.223269   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.223277   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:19.223283   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:19.223350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:19.260459   65393 cri.go:89] found id: ""
	I0404 22:58:19.260492   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.260502   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:19.260509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:19.260571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:19.298503   65393 cri.go:89] found id: ""
	I0404 22:58:19.298527   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.298534   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:19.298540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:19.298603   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:19.339562   65393 cri.go:89] found id: ""
	I0404 22:58:19.339595   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.339605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:19.339613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:19.339674   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:19.379350   65393 cri.go:89] found id: ""
	I0404 22:58:19.379383   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.379394   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:19.379401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:19.379501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:19.417360   65393 cri.go:89] found id: ""
	I0404 22:58:19.417387   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.417394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:19.417403   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:19.417420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.493267   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:19.493300   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:19.533913   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:19.533948   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:19.585900   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:19.585936   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:19.601225   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:19.601259   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:19.675774   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:22.176660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:22.190161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:22.190239   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:22.226569   65393 cri.go:89] found id: ""
	I0404 22:58:22.226601   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.226612   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:22.226621   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:22.226678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:22.263193   65393 cri.go:89] found id: ""
	I0404 22:58:22.263221   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.263232   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:22.263239   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:22.263296   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:22.305585   65393 cri.go:89] found id: ""
	I0404 22:58:22.305613   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.305625   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:22.305632   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:22.305688   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:22.343575   65393 cri.go:89] found id: ""
	I0404 22:58:22.343602   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.343613   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:22.343620   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:22.343675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:22.381394   65393 cri.go:89] found id: ""
	I0404 22:58:22.381423   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.381432   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:22.381438   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:22.381488   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:22.423631   65393 cri.go:89] found id: ""
	I0404 22:58:22.423664   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.423673   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:22.423680   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:22.423755   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:22.462618   65393 cri.go:89] found id: ""
	I0404 22:58:22.462651   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.462662   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:22.462669   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:22.462729   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:22.499448   65393 cri.go:89] found id: ""
	I0404 22:58:22.499473   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.499481   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:22.499490   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:22.499504   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:22.552937   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:22.552976   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:22.568480   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:22.568508   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:22.647552   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:22.647575   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:22.647587   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:22.732328   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:22.732366   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:25.275187   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:25.290575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:25.290660   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:25.329699   65393 cri.go:89] found id: ""
	I0404 22:58:25.329728   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.329737   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:25.329744   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:25.329808   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:25.368206   65393 cri.go:89] found id: ""
	I0404 22:58:25.368239   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.368250   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:25.368257   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:25.368320   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:25.405444   65393 cri.go:89] found id: ""
	I0404 22:58:25.405490   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.405500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:25.405508   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:25.405566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:25.448856   65393 cri.go:89] found id: ""
	I0404 22:58:25.448883   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.448891   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:25.448897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:25.448952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:25.489308   65393 cri.go:89] found id: ""
	I0404 22:58:25.489340   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.489351   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:25.489358   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:25.489418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:25.532786   65393 cri.go:89] found id: ""
	I0404 22:58:25.532810   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.532820   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:25.532828   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:25.532887   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:25.569370   65393 cri.go:89] found id: ""
	I0404 22:58:25.569400   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.569409   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:25.569428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:25.569508   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:25.609516   65393 cri.go:89] found id: ""
	I0404 22:58:25.609542   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.609553   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:25.609564   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:25.609579   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:25.661976   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:25.662011   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:25.676718   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:25.676743   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:25.754582   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:25.754612   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:25.754629   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:25.837707   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:25.837751   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:28.381474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:28.396475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:28.396549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:28.439207   65393 cri.go:89] found id: ""
	I0404 22:58:28.439239   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.439251   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:28.439259   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:28.439341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:28.481537   65393 cri.go:89] found id: ""
	I0404 22:58:28.481559   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.481567   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:28.481572   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:28.481622   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:28.522159   65393 cri.go:89] found id: ""
	I0404 22:58:28.522183   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.522194   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:28.522202   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:28.522267   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:28.563587   65393 cri.go:89] found id: ""
	I0404 22:58:28.563623   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.563634   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:28.563641   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:28.563706   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:28.601846   65393 cri.go:89] found id: ""
	I0404 22:58:28.601874   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.601885   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:28.601892   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:28.601971   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:28.639734   65393 cri.go:89] found id: ""
	I0404 22:58:28.639758   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.639765   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:28.639773   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:28.639832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:28.679049   65393 cri.go:89] found id: ""
	I0404 22:58:28.679079   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.679090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:28.679097   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:28.679152   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:28.721353   65393 cri.go:89] found id: ""
	I0404 22:58:28.721380   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.721390   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:28.721400   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:28.721414   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:28.776618   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:28.776666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:28.792435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:28.792473   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:28.867190   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:28.867219   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:28.867238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:28.950021   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:28.950056   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
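The "container status" gather above uses a small fallback chain: it resolves crictl with which and, if crictl is missing or its ps call fails, falls back to docker ps -a. A rough hand-run equivalent (a sketch, assuming the same CRI-O/crictl layout as on the test VM) is:

	# Same fallback the test uses: prefer crictl, fall back to docker if crictl is absent or errors out
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	# Narrow the listing to a single control-plane component, as the per-name probes above do
	sudo crictl ps -a --name=kube-apiserver

On this run the crictl branch succeeds but lists no control-plane containers, which is consistent with the empty "found id" results above.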
	I0404 22:58:31.496813   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:31.511401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:31.511462   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:31.552422   65393 cri.go:89] found id: ""
	I0404 22:58:31.552450   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.552458   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:31.552464   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:31.552518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:31.590548   65393 cri.go:89] found id: ""
	I0404 22:58:31.590579   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.590591   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:31.590598   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:31.590653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:31.626198   65393 cri.go:89] found id: ""
	I0404 22:58:31.626227   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.626238   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:31.626245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:31.626316   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:31.664925   65393 cri.go:89] found id: ""
	I0404 22:58:31.664952   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.664960   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:31.664966   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:31.665017   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:31.703008   65393 cri.go:89] found id: ""
	I0404 22:58:31.703031   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.703038   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:31.703044   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:31.703103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:31.741854   65393 cri.go:89] found id: ""
	I0404 22:58:31.741875   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.741884   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:31.741890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:31.741942   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:31.782573   65393 cri.go:89] found id: ""
	I0404 22:58:31.782603   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.782616   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:31.782624   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:31.782675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:31.819796   65393 cri.go:89] found id: ""
	I0404 22:58:31.819832   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.819844   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:31.819855   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:31.819872   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:31.836362   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:31.836396   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:31.916417   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:31.916438   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:31.916451   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:31.999266   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:31.999302   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:32.048147   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:32.048179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:34.601683   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:34.618245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:34.618329   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:34.660565   65393 cri.go:89] found id: ""
	I0404 22:58:34.660591   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.660598   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:34.660604   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:34.660654   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:34.696366   65393 cri.go:89] found id: ""
	I0404 22:58:34.696389   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.696397   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:34.696402   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:34.696463   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:34.734234   65393 cri.go:89] found id: ""
	I0404 22:58:34.734281   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.734293   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:34.734300   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:34.734369   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:34.770632   65393 cri.go:89] found id: ""
	I0404 22:58:34.770668   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.770681   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:34.770689   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:34.770752   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:34.808562   65393 cri.go:89] found id: ""
	I0404 22:58:34.808590   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.808600   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:34.808607   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:34.808677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:34.844177   65393 cri.go:89] found id: ""
	I0404 22:58:34.844209   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.844219   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:34.844228   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:34.844315   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:34.886060   65393 cri.go:89] found id: ""
	I0404 22:58:34.886095   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.886106   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:34.886114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:34.886174   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:34.924731   65393 cri.go:89] found id: ""
	I0404 22:58:34.924759   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.924769   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:34.924781   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:34.924798   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:34.940405   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:34.940437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:35.016762   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:35.016784   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:35.016800   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:35.096653   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:35.096688   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:35.144213   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:35.144241   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:37.702332   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:37.716442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:37.716515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:37.755197   65393 cri.go:89] found id: ""
	I0404 22:58:37.755233   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.755244   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:37.755251   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:37.755311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:37.799011   65393 cri.go:89] found id: ""
	I0404 22:58:37.799036   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.799044   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:37.799048   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:37.799105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:37.837437   65393 cri.go:89] found id: ""
	I0404 22:58:37.837466   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.837477   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:37.837486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:37.837543   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:37.876014   65393 cri.go:89] found id: ""
	I0404 22:58:37.876085   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.876097   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:37.876104   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:37.876179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:37.915088   65393 cri.go:89] found id: ""
	I0404 22:58:37.915121   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.915132   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:37.915140   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:37.915205   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:37.954991   65393 cri.go:89] found id: ""
	I0404 22:58:37.955028   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.955039   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:37.955047   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:37.955120   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:38.006875   65393 cri.go:89] found id: ""
	I0404 22:58:38.006906   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.006924   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:38.006930   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:38.006989   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:38.044477   65393 cri.go:89] found id: ""
	I0404 22:58:38.044513   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.044541   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:38.044553   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:38.044569   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:38.086425   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:38.086455   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:38.140159   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:38.140195   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:38.156371   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:38.156406   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:38.229011   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:38.229035   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:38.229058   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:40.809399   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:40.824612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:40.824694   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:40.869397   65393 cri.go:89] found id: ""
	I0404 22:58:40.869483   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.869510   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:40.869523   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:40.869583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:40.911732   65393 cri.go:89] found id: ""
	I0404 22:58:40.911760   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.911782   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:40.911788   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:40.911846   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:40.952165   65393 cri.go:89] found id: ""
	I0404 22:58:40.952193   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.952202   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:40.952209   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:40.952270   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:40.991566   65393 cri.go:89] found id: ""
	I0404 22:58:40.991598   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.991607   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:40.991613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:40.991661   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:41.033467   65393 cri.go:89] found id: ""
	I0404 22:58:41.033496   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.033505   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:41.033534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:41.033595   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:41.073350   65393 cri.go:89] found id: ""
	I0404 22:58:41.073395   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.073405   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:41.073410   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:41.073460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:41.113435   65393 cri.go:89] found id: ""
	I0404 22:58:41.113467   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.113478   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:41.113486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:41.113549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:41.152848   65393 cri.go:89] found id: ""
	I0404 22:58:41.152882   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.152892   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:41.152905   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:41.152919   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:41.199001   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:41.199039   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:41.251155   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:41.251200   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:41.268640   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:41.268669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:41.345101   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:41.345125   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:41.345142   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:43.925251   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:43.940719   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:43.940838   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:43.982366   65393 cri.go:89] found id: ""
	I0404 22:58:43.982391   65393 logs.go:276] 0 containers: []
	W0404 22:58:43.982401   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:43.982409   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:43.982477   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:44.026906   65393 cri.go:89] found id: ""
	I0404 22:58:44.026941   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.026952   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:44.026959   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:44.027024   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:44.063914   65393 cri.go:89] found id: ""
	I0404 22:58:44.063940   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.063948   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:44.063954   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:44.064008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:44.107234   65393 cri.go:89] found id: ""
	I0404 22:58:44.107270   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.107283   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:44.107292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:44.107388   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:44.144613   65393 cri.go:89] found id: ""
	I0404 22:58:44.144637   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.144658   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:44.144664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:44.144734   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:44.184821   65393 cri.go:89] found id: ""
	I0404 22:58:44.184858   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.184866   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:44.184872   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:44.184920   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:44.227152   65393 cri.go:89] found id: ""
	I0404 22:58:44.227181   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.227192   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:44.227200   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:44.227262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:44.266503   65393 cri.go:89] found id: ""
	I0404 22:58:44.266533   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.266544   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:44.266599   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:44.266614   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:44.323524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:44.323565   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:44.340420   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:44.340456   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:44.441098   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:44.441120   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:44.441137   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:44.554462   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:44.554498   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:47.101901   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:47.116485   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:47.116551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:47.158013   65393 cri.go:89] found id: ""
	I0404 22:58:47.158047   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.158063   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:47.158071   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:47.158136   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:47.194653   65393 cri.go:89] found id: ""
	I0404 22:58:47.194677   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.194688   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:47.194696   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:47.194753   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:47.235406   65393 cri.go:89] found id: ""
	I0404 22:58:47.235435   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.235447   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:47.235456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:47.235518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:47.278696   65393 cri.go:89] found id: ""
	I0404 22:58:47.278724   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.278733   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:47.278741   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:47.278832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:47.316839   65393 cri.go:89] found id: ""
	I0404 22:58:47.316871   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.316883   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:47.316890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:47.316952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:47.363249   65393 cri.go:89] found id: ""
	I0404 22:58:47.363274   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.363282   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:47.363287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:47.363336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:47.402331   65393 cri.go:89] found id: ""
	I0404 22:58:47.402354   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.402362   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:47.402369   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:47.402429   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:47.441136   65393 cri.go:89] found id: ""
	I0404 22:58:47.441156   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.441163   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:47.441171   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:47.441182   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:47.518956   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:47.518981   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:47.518996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:47.600303   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:47.600339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:47.642110   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:47.642138   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:47.694231   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:47.694267   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.209744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:50.229994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:50.230052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:50.299090   65393 cri.go:89] found id: ""
	I0404 22:58:50.299237   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.299257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:50.299266   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:50.299324   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:50.353476   65393 cri.go:89] found id: ""
	I0404 22:58:50.353506   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.353516   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:50.353524   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:50.353580   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:50.407652   65393 cri.go:89] found id: ""
	I0404 22:58:50.407677   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.407684   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:50.407692   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:50.407775   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:50.445622   65393 cri.go:89] found id: ""
	I0404 22:58:50.445651   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.445658   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:50.445666   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:50.445749   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:50.485764   65393 cri.go:89] found id: ""
	I0404 22:58:50.485791   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.485803   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:50.485810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:50.485891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:50.525552   65393 cri.go:89] found id: ""
	I0404 22:58:50.525583   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.525601   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:50.525609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:50.525675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:50.564452   65393 cri.go:89] found id: ""
	I0404 22:58:50.564477   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.564488   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:50.564498   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:50.564548   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:50.608534   65393 cri.go:89] found id: ""
	I0404 22:58:50.608564   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.608572   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:50.608580   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:50.608592   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:50.686645   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:50.686690   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:50.731681   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:50.731711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:50.788550   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:50.788593   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.804637   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:50.804666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:50.888452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
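Every describe-nodes attempt in this stretch fails identically: the pinned kubectl binary at /var/lib/minikube/binaries/v1.20.0/kubectl reads the node-local kubeconfig, that kubeconfig points at localhost:8443, and the connection is refused because no apiserver container has started. A quick way to confirm this from inside the guest, independent of kubectl, is to probe the port directly; this is only a sketch and assumes curl and ss are available in the guest image:

	# With the apiserver down the TCP connect itself is refused; a healthy apiserver would answer /healthz
	curl -ks https://localhost:8443/healthz || echo "connection to localhost:8443 refused"
	# Alternatively, check whether anything is listening on 8443 at all
	sudo ss -tlnp | grep 8443 || echo "nothing listening on :8443"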
	I0404 22:58:53.388937   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:53.403744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:53.403822   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:53.441796   65393 cri.go:89] found id: ""
	I0404 22:58:53.441817   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.441826   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:53.441835   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:53.441899   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:53.486302   65393 cri.go:89] found id: ""
	I0404 22:58:53.486326   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.486335   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:53.486340   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:53.486405   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:53.526429   65393 cri.go:89] found id: ""
	I0404 22:58:53.526455   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.526462   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:53.526467   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:53.526521   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:53.568076   65393 cri.go:89] found id: ""
	I0404 22:58:53.568099   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.568107   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:53.568114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:53.568185   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:53.608922   65393 cri.go:89] found id: ""
	I0404 22:58:53.608956   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.608964   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:53.608970   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:53.609027   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:53.645604   65393 cri.go:89] found id: ""
	I0404 22:58:53.645635   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.645646   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:53.645658   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:53.645718   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:53.684258   65393 cri.go:89] found id: ""
	I0404 22:58:53.684283   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.684293   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:53.684301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:53.684359   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:53.722616   65393 cri.go:89] found id: ""
	I0404 22:58:53.722647   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.722658   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:53.722671   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:53.722685   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:53.781126   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:53.781162   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:53.796188   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:53.796219   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:53.880536   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:53.880558   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:53.880571   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:53.970199   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:53.970236   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:56.510589   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:56.525258   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:56.525336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:56.563463   65393 cri.go:89] found id: ""
	I0404 22:58:56.563495   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.563506   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:56.563514   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:56.563576   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:56.603330   65393 cri.go:89] found id: ""
	I0404 22:58:56.603355   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.603364   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:56.603369   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:56.603418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:56.645266   65393 cri.go:89] found id: ""
	I0404 22:58:56.645291   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.645351   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:56.645368   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:56.645426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:56.691073   65393 cri.go:89] found id: ""
	I0404 22:58:56.691098   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.691108   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:56.691121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:56.691179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:56.729687   65393 cri.go:89] found id: ""
	I0404 22:58:56.729726   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.729737   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:56.729744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:56.729832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:56.770699   65393 cri.go:89] found id: ""
	I0404 22:58:56.770732   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.770743   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:56.770751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:56.770815   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:56.808948   65393 cri.go:89] found id: ""
	I0404 22:58:56.808972   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.808982   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:56.808989   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:56.809069   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:56.852467   65393 cri.go:89] found id: ""
	I0404 22:58:56.852490   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.852501   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:56.852511   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:56.852526   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:56.903115   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:56.903147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:56.919521   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:56.919550   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:56.995265   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:56.995297   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:56.995317   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:57.071623   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:57.071660   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:59.614687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:59.628404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:59.628474   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:59.666293   65393 cri.go:89] found id: ""
	I0404 22:58:59.666320   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.666328   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:59.666332   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:59.666407   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:59.706066   65393 cri.go:89] found id: ""
	I0404 22:58:59.706093   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.706104   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:59.706111   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:59.706166   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:59.750467   65393 cri.go:89] found id: ""
	I0404 22:58:59.750493   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.750504   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:59.750511   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:59.750575   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:59.788477   65393 cri.go:89] found id: ""
	I0404 22:58:59.788499   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.788507   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:59.788512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:59.788558   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:59.829030   65393 cri.go:89] found id: ""
	I0404 22:58:59.829052   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.829062   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:59.829069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:59.829151   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:59.870121   65393 cri.go:89] found id: ""
	I0404 22:58:59.870146   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.870156   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:59.870163   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:59.870225   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:59.912149   65393 cri.go:89] found id: ""
	I0404 22:58:59.912170   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.912178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:59.912185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:59.912245   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:59.950867   65393 cri.go:89] found id: ""
	I0404 22:58:59.950903   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.950914   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:59.950924   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:59.950950   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:00.031828   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:00.031862   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:00.079398   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:00.079425   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:00.128993   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:00.129024   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:00.146214   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:00.146238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:00.224580   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:02.724888   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:02.738926   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:02.738986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:02.780451   65393 cri.go:89] found id: ""
	I0404 22:59:02.780475   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.780486   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:02.780493   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:02.780551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:02.825725   65393 cri.go:89] found id: ""
	I0404 22:59:02.825745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.825753   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:02.825758   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:02.825806   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.866717   65393 cri.go:89] found id: ""
	I0404 22:59:02.866745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.866752   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:02.866757   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.866803   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.909020   65393 cri.go:89] found id: ""
	I0404 22:59:02.909040   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.909048   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:02.909053   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.909103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.951031   65393 cri.go:89] found id: ""
	I0404 22:59:02.951055   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.951064   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:02.951069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.951128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:03.000274   65393 cri.go:89] found id: ""
	I0404 22:59:03.000304   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.000315   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:03.000322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:03.000385   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:03.041766   65393 cri.go:89] found id: ""
	I0404 22:59:03.041797   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.041807   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:03.041814   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:03.041871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:03.086600   65393 cri.go:89] found id: ""
	I0404 22:59:03.086623   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.086631   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:03.086639   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:03.086654   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:03.145868   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.145902   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.164345   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:03.164373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:03.239295   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:03.239331   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:03.239347   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:03.337429   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.337471   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:05.885881   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:05.899569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:05.899627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:05.939067   65393 cri.go:89] found id: ""
	I0404 22:59:05.939090   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.939097   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:05.939104   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:05.939163   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:05.977410   65393 cri.go:89] found id: ""
	I0404 22:59:05.977434   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.977441   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:05.977447   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:05.977492   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.018127   65393 cri.go:89] found id: ""
	I0404 22:59:06.018149   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.018156   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:06.018161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.018211   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.057280   65393 cri.go:89] found id: ""
	I0404 22:59:06.057316   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.057327   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:06.057334   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.057396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.095220   65393 cri.go:89] found id: ""
	I0404 22:59:06.095246   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.095255   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:06.095262   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.095334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:06.136192   65393 cri.go:89] found id: ""
	I0404 22:59:06.136291   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.136310   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:06.136320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.136381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.193307   65393 cri.go:89] found id: ""
	I0404 22:59:06.193336   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.193347   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.193355   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:06.193415   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:06.233527   65393 cri.go:89] found id: ""
	I0404 22:59:06.233558   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.233566   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:06.233574   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.233585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:06.320567   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.320602   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.363687   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:06.363718   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:06.423209   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:06.423246   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:06.437978   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:06.438009   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:06.524620   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:09.025228   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:09.040219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:09.040306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:09.077397   65393 cri.go:89] found id: ""
	I0404 22:59:09.077428   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.077439   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:09.077447   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:09.077530   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:09.115279   65393 cri.go:89] found id: ""
	I0404 22:59:09.115309   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.115319   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:09.115326   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:09.115391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:09.156338   65393 cri.go:89] found id: ""
	I0404 22:59:09.156367   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.156375   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:09.156381   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:09.156444   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:09.199281   65393 cri.go:89] found id: ""
	I0404 22:59:09.199310   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.199319   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:09.199325   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:09.199377   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:09.239845   65393 cri.go:89] found id: ""
	I0404 22:59:09.239870   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.239878   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:09.239883   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:09.239944   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:09.285520   65393 cri.go:89] found id: ""
	I0404 22:59:09.285551   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.285562   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:09.285569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:09.285635   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:09.322005   65393 cri.go:89] found id: ""
	I0404 22:59:09.322033   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.322043   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:09.322050   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:09.322113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:09.357356   65393 cri.go:89] found id: ""
	I0404 22:59:09.357384   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.357394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:09.357404   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:09.357419   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:09.437353   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:09.437389   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:09.480066   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:09.480095   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:09.534394   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:09.534433   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:09.548926   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:09.548951   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:09.623970   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:12.124508   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:12.139786   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:12.139862   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:12.179841   65393 cri.go:89] found id: ""
	I0404 22:59:12.179864   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.179872   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:12.179877   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:12.179934   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:12.217232   65393 cri.go:89] found id: ""
	I0404 22:59:12.217260   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.217270   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:12.217277   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:12.217333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:12.258875   65393 cri.go:89] found id: ""
	I0404 22:59:12.258905   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.258917   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:12.258927   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:12.258990   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:12.302455   65393 cri.go:89] found id: ""
	I0404 22:59:12.302493   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.302508   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:12.302516   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:12.302581   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:12.345264   65393 cri.go:89] found id: ""
	I0404 22:59:12.345298   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.345310   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:12.345322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:12.345386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:12.383782   65393 cri.go:89] found id: ""
	I0404 22:59:12.383805   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.383814   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:12.383820   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:12.383881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:12.421738   65393 cri.go:89] found id: ""
	I0404 22:59:12.421767   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.421777   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:12.421784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:12.421844   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:12.460348   65393 cri.go:89] found id: ""
	I0404 22:59:12.460379   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.460391   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:12.460407   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:12.460424   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:12.516043   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:12.516081   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:12.531557   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:12.531585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:12.603052   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:12.603080   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:12.603098   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:12.689033   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:12.689069   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.253621   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:15.268084   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:15.268162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:15.306888   65393 cri.go:89] found id: ""
	I0404 22:59:15.306913   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.306922   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:15.306929   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:15.306986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:15.345169   65393 cri.go:89] found id: ""
	I0404 22:59:15.345203   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.345214   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:15.345221   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:15.345279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:15.381835   65393 cri.go:89] found id: ""
	I0404 22:59:15.381863   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.381874   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:15.381881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:15.381941   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:15.418221   65393 cri.go:89] found id: ""
	I0404 22:59:15.418247   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.418254   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:15.418259   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:15.418302   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:15.456658   65393 cri.go:89] found id: ""
	I0404 22:59:15.456684   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.456696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:15.456703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:15.456761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:15.498325   65393 cri.go:89] found id: ""
	I0404 22:59:15.498349   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.498359   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:15.498367   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:15.498443   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:15.538694   65393 cri.go:89] found id: ""
	I0404 22:59:15.538723   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.538731   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:15.538738   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:15.538796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:15.575615   65393 cri.go:89] found id: ""
	I0404 22:59:15.575642   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.575650   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:15.575660   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:15.575672   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.616824   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:15.616851   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:15.670897   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:15.670945   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:15.688394   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:15.688429   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:15.764184   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:15.764207   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:15.764222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:18.346181   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:18.361390   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:18.361465   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:18.410432   65393 cri.go:89] found id: ""
	I0404 22:59:18.410463   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.410474   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:18.410482   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:18.410547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:18.449280   65393 cri.go:89] found id: ""
	I0404 22:59:18.449309   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.449317   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:18.449322   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:18.449380   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:18.508387   65393 cri.go:89] found id: ""
	I0404 22:59:18.508411   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.508420   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:18.508425   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:18.508481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:18.555469   65393 cri.go:89] found id: ""
	I0404 22:59:18.555492   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.555501   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:18.555506   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:18.555551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:18.592206   65393 cri.go:89] found id: ""
	I0404 22:59:18.592231   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.592239   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:18.592245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:18.592294   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:18.628850   65393 cri.go:89] found id: ""
	I0404 22:59:18.628890   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.628900   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:18.628908   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:18.628968   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:18.667507   65393 cri.go:89] found id: ""
	I0404 22:59:18.667543   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.667556   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:18.667564   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:18.667630   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:18.706367   65393 cri.go:89] found id: ""
	I0404 22:59:18.706392   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.706410   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:18.706422   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:18.706438   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:18.761069   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:18.761108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:18.777164   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:18.777204   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:18.861741   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:18.861769   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:18.861782   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:18.948064   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:18.948108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:21.497977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:21.511810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:21.511901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:21.548151   65393 cri.go:89] found id: ""
	I0404 22:59:21.548177   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.548188   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:21.548196   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:21.548241   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:21.587395   65393 cri.go:89] found id: ""
	I0404 22:59:21.587420   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.587436   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:21.587441   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:21.587507   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:21.624280   65393 cri.go:89] found id: ""
	I0404 22:59:21.624312   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.624322   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:21.624330   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:21.624391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:21.664557   65393 cri.go:89] found id: ""
	I0404 22:59:21.664583   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.664593   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:21.664600   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:21.664666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:21.705570   65393 cri.go:89] found id: ""
	I0404 22:59:21.705601   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.705614   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:21.705622   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:21.705683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:21.744722   65393 cri.go:89] found id: ""
	I0404 22:59:21.744755   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.744764   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:21.744770   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:21.744831   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:21.784997   65393 cri.go:89] found id: ""
	I0404 22:59:21.785036   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.785047   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:21.785054   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:21.785117   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:21.825404   65393 cri.go:89] found id: ""
	I0404 22:59:21.825428   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.825435   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:21.825443   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:21.825467   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:21.880421   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:21.880470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:21.898337   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:21.898367   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:21.987201   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:21.987233   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:21.987249   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.070135   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:22.070176   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:24.613694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:24.627661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:24.627823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:24.667552   65393 cri.go:89] found id: ""
	I0404 22:59:24.667580   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.667594   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:24.667601   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:24.667663   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:24.705866   65393 cri.go:89] found id: ""
	I0404 22:59:24.705888   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.705897   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:24.705905   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:24.705975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:24.746920   65393 cri.go:89] found id: ""
	I0404 22:59:24.746948   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.746959   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:24.746967   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:24.747021   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:24.792236   65393 cri.go:89] found id: ""
	I0404 22:59:24.792259   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.792270   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:24.792281   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:24.792340   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:24.832066   65393 cri.go:89] found id: ""
	I0404 22:59:24.832096   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.832107   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:24.832133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:24.832207   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:24.873568   65393 cri.go:89] found id: ""
	I0404 22:59:24.873594   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.873605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:24.873612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:24.873678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:24.922719   65393 cri.go:89] found id: ""
	I0404 22:59:24.922743   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.922750   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:24.922756   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:24.922801   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:24.981180   65393 cri.go:89] found id: ""
	I0404 22:59:24.981229   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.981243   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:24.981255   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:24.981272   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:25.039695   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:25.039735   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:25.097992   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:25.098037   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:25.113941   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:25.113970   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:25.185615   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:25.185643   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:25.185659   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:27.772867   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:27.787478   65393 kubeadm.go:591] duration metric: took 4m3.182360219s to restartPrimaryControlPlane
	W0404 22:59:27.787560   65393 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:27.787594   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 22:59:30.083285   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.29566411s)
	I0404 22:59:30.083364   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:30.099547   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:59:30.110792   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:59:30.123094   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:59:30.123110   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:59:30.123152   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:59:30.133535   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:59:30.133596   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:59:30.144194   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:59:30.154411   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:59:30.154476   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:59:30.164648   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.174227   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:59:30.174292   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.184396   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:59:30.194311   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:59:30.194370   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:59:30.204463   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 22:59:30.284881   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 22:59:30.285065   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 22:59:30.439256   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 22:59:30.439379   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 22:59:30.439558   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 22:59:30.640320   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 22:59:30.642667   65393 out.go:204]   - Generating certificates and keys ...
	I0404 22:59:30.642787   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 22:59:30.642883   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 22:59:30.643858   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 22:59:30.643964   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 22:59:30.644068   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 22:59:30.644183   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 22:59:30.644914   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 22:59:30.646151   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 22:59:30.646413   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 22:59:30.646975   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 22:59:30.647036   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 22:59:30.647163   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 22:59:30.818578   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 22:59:31.002928   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 22:59:31.300200   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 22:59:31.508251   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 22:59:31.525515   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 22:59:31.527679   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 22:59:31.527773   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 22:59:31.680829   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 22:59:31.682802   65393 out.go:204]   - Booting up control plane ...
	I0404 22:59:31.682939   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 22:59:31.684100   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 22:59:31.685552   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 22:59:31.686931   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 22:59:31.689241   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:00:11.689226   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:00:11.689589   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:11.689862   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:16.690194   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:16.690413   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:26.690539   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:26.690756   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:46.691433   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:46.691736   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:01:26.692667   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:01:26.693040   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:01:26.693065   65393 kubeadm.go:309] 
	I0404 23:01:26.693121   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:01:26.693176   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:01:26.693188   65393 kubeadm.go:309] 
	I0404 23:01:26.693230   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:01:26.693296   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:01:26.693448   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:01:26.693460   65393 kubeadm.go:309] 
	I0404 23:01:26.693615   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:01:26.693668   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:01:26.693715   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:01:26.693724   65393 kubeadm.go:309] 
	I0404 23:01:26.693859   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:01:26.693972   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:01:26.693981   65393 kubeadm.go:309] 
	I0404 23:01:26.694101   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:01:26.694189   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:01:26.694308   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:01:26.694419   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:01:26.694439   65393 kubeadm.go:309] 
	I0404 23:01:26.695392   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:01:26.695534   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:01:26.695645   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
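The repeated [kubelet-check] probes above are kubeadm polling the kubelet's healthz endpoint on localhost:10248; because the kubelet never becomes healthy, the 4m0s wait-control-plane phase times out. A rough sketch of reproducing the same checks by hand from inside the VM (for example via `minikube ssh`), using only commands the kubeadm output itself suggests:

	curl -sSL http://localhost:10248/healthz      # the health probe kubeadm keeps retrying
	sudo systemctl status kubelet                 # is the kubelet service running at all?
	sudo journalctl -xeu kubelet                  # kubelet logs usually show the real cause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the ps output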
	W0404 23:01:26.695786   65393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
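After this first failure, minikube tears the partial control plane down and retries kubeadm init once more, as the following lines show. The reset it issues (copied from the Run: line below) is effectively:

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm reset --cri-socket /var/run/crio/crio.sock --force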
	I0404 23:01:26.695852   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:01:27.175340   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:01:27.191436   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:01:27.202985   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:01:27.203023   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 23:01:27.203095   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 23:01:27.214266   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:01:27.214326   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:01:27.225823   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 23:01:27.236219   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:01:27.236291   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:01:27.247062   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.257772   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:01:27.257838   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.270595   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 23:01:27.283809   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:01:27.283884   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 23:01:27.294917   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:01:27.370268   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 23:01:27.370431   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:01:27.531171   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:01:27.531309   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:01:27.531502   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:01:27.741128   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:01:27.743555   65393 out.go:204]   - Generating certificates and keys ...
	I0404 23:01:27.743674   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:01:27.743778   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:01:27.743900   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:01:27.744020   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:01:27.744144   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:01:27.744231   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:01:27.744396   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:01:27.744532   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:01:27.744762   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:01:27.745172   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:01:27.745405   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:01:27.745902   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:01:27.811633   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:01:27.874609   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:01:28.009290   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:01:28.171654   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:01:28.193647   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:01:28.194533   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:01:28.194615   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:01:28.345640   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:01:28.348028   65393 out.go:204]   - Booting up control plane ...
	I0404 23:01:28.348233   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:01:28.352245   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:01:28.354730   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:01:28.354860   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:01:28.366484   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:02:08.368069   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:02:08.368190   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:08.368485   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:13.368934   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:13.369212   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:23.369698   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:23.369947   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:43.370821   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:43.371097   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372612   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:03:23.372925   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372952   65393 kubeadm.go:309] 
	I0404 23:03:23.373009   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:03:23.373060   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:03:23.373083   65393 kubeadm.go:309] 
	I0404 23:03:23.373139   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:03:23.373204   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:03:23.373444   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:03:23.373460   65393 kubeadm.go:309] 
	I0404 23:03:23.373609   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:03:23.373664   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:03:23.373709   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:03:23.373720   65393 kubeadm.go:309] 
	I0404 23:03:23.373881   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:03:23.373993   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:03:23.374004   65393 kubeadm.go:309] 
	I0404 23:03:23.374160   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:03:23.374293   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:03:23.374425   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:03:23.374542   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:03:23.374565   65393 kubeadm.go:309] 
	I0404 23:03:23.375946   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:03:23.376063   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:03:23.376228   65393 kubeadm.go:393] duration metric: took 7m58.828175379s to StartCluster
	I0404 23:03:23.376287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 23:03:23.376229   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0404 23:03:23.376350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 23:03:23.427597   65393 cri.go:89] found id: ""
	I0404 23:03:23.427625   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.427637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 23:03:23.427644   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 23:03:23.427709   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 23:03:23.466127   65393 cri.go:89] found id: ""
	I0404 23:03:23.466158   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.466168   65393 logs.go:278] No container was found matching "etcd"
	I0404 23:03:23.466176   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 23:03:23.466240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 23:03:23.509244   65393 cri.go:89] found id: ""
	I0404 23:03:23.509287   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.509295   65393 logs.go:278] No container was found matching "coredns"
	I0404 23:03:23.509301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 23:03:23.509347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 23:03:23.553691   65393 cri.go:89] found id: ""
	I0404 23:03:23.553722   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.553732   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 23:03:23.553740   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 23:03:23.553807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 23:03:23.601941   65393 cri.go:89] found id: ""
	I0404 23:03:23.601965   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.601973   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 23:03:23.601979   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 23:03:23.602037   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 23:03:23.639767   65393 cri.go:89] found id: ""
	I0404 23:03:23.639802   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.639811   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 23:03:23.639817   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 23:03:23.639875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 23:03:23.680136   65393 cri.go:89] found id: ""
	I0404 23:03:23.680168   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.680179   65393 logs.go:278] No container was found matching "kindnet"
	I0404 23:03:23.680187   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 23:03:23.680246   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 23:03:23.719745   65393 cri.go:89] found id: ""
	I0404 23:03:23.719767   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.719774   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 23:03:23.719784   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 23:03:23.719797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 23:03:23.776065   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 23:03:23.776105   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 23:03:23.791442   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 23:03:23.791469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 23:03:23.884793   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 23:03:23.884820   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 23:03:23.884836   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 23:03:24.001882   65393 logs.go:123] Gathering logs for container status ...
	I0404 23:03:24.001926   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
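Once the retry also times out, minikube gathers diagnostics before giving up: the kubelet and CRI-O journals, dmesg, `kubectl describe nodes`, and the container list. Equivalent commands, taken from the Run: lines above, for collecting the same information manually on the node:

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo crictl ps -a

Note that `kubectl describe nodes` fails here with "connection refused" because no apiserver is listening on localhost:8443, which is consistent with the kubelet never having started the static pods.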
	W0404 23:03:24.056020   65393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0404 23:03:24.056075   65393 out.go:239] * 
	* 
	W0404 23:03:24.056157   65393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.056191   65393 out.go:239] * 
	* 
	W0404 23:03:24.057119   65393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
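If this failure reproduces outside CI, the boxed advice above applies: collect the full log and attach it to a new minikube issue. The two steps it points at, roughly:

	minikube logs --file=logs.txt
	# then open https://github.com/kubernetes/minikube/issues/new/choose and attach logs.txt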
	I0404 23:03:24.060566   65393 out.go:177] 
	W0404 23:03:24.061904   65393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.061981   65393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0404 23:03:24.062009   65393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0404 23:03:24.063812   65393 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-343162 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 2 (256.792244ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-343162 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-343162 logs -n 25: (1.649043387s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-063570 sudo cat                              | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo find                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo crio                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-063570                                       | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-443615 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | disable-driver-mounts-443615                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:46 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-952083  | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-143118            | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-024416             | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-343162        | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-952083       | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 23:00 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-143118                 | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-024416                  | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-343162             | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 22:50:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 22:50:56.470398   65393 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:50:56.470518   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470527   65393 out.go:304] Setting ErrFile to fd 2...
	I0404 22:50:56.470531   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470702   65393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:50:56.471207   65393 out.go:298] Setting JSON to false
	I0404 22:50:56.472107   65393 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5602,"bootTime":1712265455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:50:56.472206   65393 start.go:139] virtualization: kvm guest
	I0404 22:50:56.474636   65393 out.go:177] * [old-k8s-version-343162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:50:56.477086   65393 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:50:56.477116   65393 notify.go:220] Checking for updates...
	I0404 22:50:56.479875   65393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:50:56.481381   65393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:50:56.482598   65393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:50:56.483896   65393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:50:56.485276   65393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:50:56.487030   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:50:56.487414   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.487471   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.502537   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44449
	I0404 22:50:56.502977   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.503568   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.503592   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.503915   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.504146   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.506219   65393 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0404 22:50:56.507513   65393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:50:56.507838   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.507872   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.522700   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0404 22:50:56.523202   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.523675   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.523702   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.524012   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.524213   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.560806   65393 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 22:50:56.562468   65393 start.go:297] selected driver: kvm2
	I0404 22:50:56.562485   65393 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.562593   65393 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:50:56.563382   65393 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.563463   65393 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:50:56.579804   65393 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:50:56.580216   65393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:50:56.580311   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:50:56.580325   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:50:56.580361   65393 start.go:340] cluster config:
	{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.580508   65393 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.582753   65393 out.go:177] * Starting "old-k8s-version-343162" primary control-plane node in "old-k8s-version-343162" cluster
	I0404 22:50:55.940353   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:50:56.584381   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:50:56.584440   65393 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0404 22:50:56.584451   65393 cache.go:56] Caching tarball of preloaded images
	I0404 22:50:56.584585   65393 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 22:50:56.584616   65393 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0404 22:50:56.584754   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:50:56.585041   65393 start.go:360] acquireMachinesLock for old-k8s-version-343162: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:50:59.012433   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:05.092380   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:08.164456   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:14.244461   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:17.316477   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:23.396389   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:26.468349   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:32.548445   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:35.620422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:41.700386   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:44.772391   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:50.852426   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:53.924460   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:00.004442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:03.076397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:09.156418   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:12.228432   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:18.308468   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:21.380441   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:27.460405   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:30.532470   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:36.612450   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:39.684473   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:45.764397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:48.836435   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:54.916369   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:57.992400   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:04.068476   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:07.140457   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:13.220492   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:16.292379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:22.372422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:25.444444   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:31.524421   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:34.596378   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:40.676452   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:43.748475   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:49.828481   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:52.900427   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:58.980363   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:02.052442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:08.132379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:11.204438   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:14.209023   64902 start.go:364] duration metric: took 4m27.708792236s to acquireMachinesLock for "embed-certs-143118"
	I0404 22:54:14.209073   64902 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:14.209081   64902 fix.go:54] fixHost starting: 
	I0404 22:54:14.209454   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:14.209493   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:14.224780   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0404 22:54:14.225202   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:14.225774   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:14.225796   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:14.226086   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:14.226268   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:14.226381   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:14.227982   64902 fix.go:112] recreateIfNeeded on embed-certs-143118: state=Stopped err=<nil>
	I0404 22:54:14.228030   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	W0404 22:54:14.228195   64902 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:14.229974   64902 out.go:177] * Restarting existing kvm2 VM for "embed-certs-143118" ...
	I0404 22:54:14.231465   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Start
	I0404 22:54:14.231647   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring networks are active...
	I0404 22:54:14.232454   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network default is active
	I0404 22:54:14.232844   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network mk-embed-certs-143118 is active
	I0404 22:54:14.233268   64902 main.go:141] libmachine: (embed-certs-143118) Getting domain xml...
	I0404 22:54:14.234059   64902 main.go:141] libmachine: (embed-certs-143118) Creating domain...
	I0404 22:54:15.443570   64902 main.go:141] libmachine: (embed-certs-143118) Waiting to get IP...
	I0404 22:54:15.444501   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.444881   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.444974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.444857   65866 retry.go:31] will retry after 235.384261ms: waiting for machine to come up
	I0404 22:54:15.682336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.682843   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.682889   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.682802   65866 retry.go:31] will retry after 298.217645ms: waiting for machine to come up
	I0404 22:54:15.982346   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.982793   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.982822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.982737   65866 retry.go:31] will retry after 388.227781ms: waiting for machine to come up
	I0404 22:54:16.372259   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.372758   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.372783   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.372709   65866 retry.go:31] will retry after 455.494549ms: waiting for machine to come up
	I0404 22:54:14.206346   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:14.206380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206696   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:54:14.206723   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206981   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:54:14.208882   64791 machine.go:97] duration metric: took 4m37.40831033s to provisionDockerMachine
	I0404 22:54:14.208934   64791 fix.go:56] duration metric: took 4m37.430672573s for fixHost
	I0404 22:54:14.208943   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 4m37.430710876s
	W0404 22:54:14.208977   64791 start.go:713] error starting host: provision: host is not running
	W0404 22:54:14.209074   64791 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0404 22:54:14.209085   64791 start.go:728] Will try again in 5 seconds ...
	I0404 22:54:16.829381   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.829863   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.829895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.829758   65866 retry.go:31] will retry after 476.920945ms: waiting for machine to come up
	I0404 22:54:17.308558   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:17.309233   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:17.309260   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:17.309155   65866 retry.go:31] will retry after 723.322819ms: waiting for machine to come up
	I0404 22:54:18.034138   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.034605   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.034633   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.034559   65866 retry.go:31] will retry after 858.492179ms: waiting for machine to come up
	I0404 22:54:18.894172   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.894590   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.894621   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.894541   65866 retry.go:31] will retry after 1.243998506s: waiting for machine to come up
	I0404 22:54:20.140387   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:20.140872   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:20.140906   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:20.140777   65866 retry.go:31] will retry after 1.245446322s: waiting for machine to come up
	I0404 22:54:21.388210   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:21.388626   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:21.388653   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:21.388588   65866 retry.go:31] will retry after 2.315520772s: waiting for machine to come up
	I0404 22:54:19.210605   64791 start.go:360] acquireMachinesLock for default-k8s-diff-port-952083: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:54:23.707374   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:23.707938   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:23.707971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:23.707895   65866 retry.go:31] will retry after 1.925131112s: waiting for machine to come up
	I0404 22:54:25.635778   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:25.636251   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:25.636281   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:25.636192   65866 retry.go:31] will retry after 3.393560306s: waiting for machine to come up
	I0404 22:54:29.033804   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:29.034284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:29.034311   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:29.034223   65866 retry.go:31] will retry after 4.387913748s: waiting for machine to come up
	I0404 22:54:34.941563   65047 start.go:364] duration metric: took 4m31.283417073s to acquireMachinesLock for "no-preload-024416"
	I0404 22:54:34.941648   65047 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:34.941658   65047 fix.go:54] fixHost starting: 
	I0404 22:54:34.942144   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:34.942187   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:34.959447   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0404 22:54:34.959938   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:34.960536   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:54:34.960563   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:34.960909   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:34.961137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:34.961305   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:54:34.963158   65047 fix.go:112] recreateIfNeeded on no-preload-024416: state=Stopped err=<nil>
	I0404 22:54:34.963183   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	W0404 22:54:34.963366   65047 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:34.965339   65047 out.go:177] * Restarting existing kvm2 VM for "no-preload-024416" ...
	I0404 22:54:33.427303   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427873   64902 main.go:141] libmachine: (embed-certs-143118) Found IP for machine: 192.168.61.137
	I0404 22:54:33.427903   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has current primary IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427916   64902 main.go:141] libmachine: (embed-certs-143118) Reserving static IP address...
	I0404 22:54:33.428384   64902 main.go:141] libmachine: (embed-certs-143118) Reserved static IP address: 192.168.61.137
	I0404 22:54:33.428436   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.428454   64902 main.go:141] libmachine: (embed-certs-143118) Waiting for SSH to be available...
	I0404 22:54:33.428483   64902 main.go:141] libmachine: (embed-certs-143118) DBG | skip adding static IP to network mk-embed-certs-143118 - found existing host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"}
	I0404 22:54:33.428496   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Getting to WaitForSSH function...
	I0404 22:54:33.430650   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.430971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.430999   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.431167   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH client type: external
	I0404 22:54:33.431187   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa (-rw-------)
	I0404 22:54:33.431213   64902 main.go:141] libmachine: (embed-certs-143118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:33.431231   64902 main.go:141] libmachine: (embed-certs-143118) DBG | About to run SSH command:
	I0404 22:54:33.431247   64902 main.go:141] libmachine: (embed-certs-143118) DBG | exit 0
	I0404 22:54:33.556780   64902 main.go:141] libmachine: (embed-certs-143118) DBG | SSH cmd err, output: <nil>: 
	I0404 22:54:33.557184   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetConfigRaw
	I0404 22:54:33.557821   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.560786   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561122   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.561156   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561482   64902 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/config.json ...
	I0404 22:54:33.561714   64902 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:33.561738   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:33.562027   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.564494   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564777   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.564800   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564996   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.565203   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565364   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565525   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.565660   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.565840   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.565851   64902 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:33.672777   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:33.672807   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673051   64902 buildroot.go:166] provisioning hostname "embed-certs-143118"
	I0404 22:54:33.673072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673221   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.675895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676276   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.676302   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676441   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.676631   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676783   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676920   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.677099   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.677291   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.677306   64902 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-143118 && echo "embed-certs-143118" | sudo tee /etc/hostname
	I0404 22:54:33.800210   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-143118
	
	I0404 22:54:33.800234   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.802902   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.803313   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803464   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.803699   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.803917   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.804130   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.804307   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.804477   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.804493   64902 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-143118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-143118/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-143118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:33.922754   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
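
The SSH commands above show how the machine is provisioned a hostname: the new name is written to /etc/hostname, and /etc/hosts is rewritten (or appended to) so that 127.0.1.1 resolves to it. A minimal local sketch of the same idea, assuming we can shell out with sudo on the target; the runCmd helper is ours, not minikube's:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runCmd is a hypothetical helper standing in for minikube's ssh_runner:
    // it runs a shell snippet and returns its combined output.
    func runCmd(script string) (string, error) {
        out, err := exec.Command("bash", "-c", script).CombinedOutput()
        return string(out), err
    }

    func setHostname(name string) error {
        // Write the hostname and persist it, mirroring the logged command.
        if _, err := runCmd(fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)); err != nil {
            return err
        }
        // Make sure 127.0.1.1 resolves to the new name, either by rewriting an
        // existing 127.0.1.1 line or appending one.
        script := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
        _, err := runCmd(script)
        return err
    }

    func main() {
        if err := setHostname("embed-certs-143118"); err != nil {
            fmt.Println("provisioning hostname failed:", err)
        }
    }
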
	I0404 22:54:33.922787   64902 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:33.922810   64902 buildroot.go:174] setting up certificates
	I0404 22:54:33.922821   64902 provision.go:84] configureAuth start
	I0404 22:54:33.922829   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.923168   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.926018   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926349   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.926376   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926536   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.928860   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929202   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.929225   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929405   64902 provision.go:143] copyHostCerts
	I0404 22:54:33.929464   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:33.929474   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:33.929538   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:33.929623   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:33.929631   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:33.929654   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:33.929705   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:33.929712   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:33.929733   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:33.929781   64902 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.embed-certs-143118 san=[127.0.0.1 192.168.61.137 embed-certs-143118 localhost minikube]
	I0404 22:54:34.248318   64902 provision.go:177] copyRemoteCerts
	I0404 22:54:34.248368   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:34.248392   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.251549   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.251969   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.252005   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.252162   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.252415   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.252592   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.252806   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.340287   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:54:34.367083   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:34.392473   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:34.418850   64902 provision.go:87] duration metric: took 496.019024ms to configureAuth
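
configureAuth above generates a server certificate whose SAN list covers the loopback address, the machine IP, the machine name, localhost and minikube, signed by the workspace CA, and then copies it to /etc/docker on the guest. A minimal sketch of producing such a SAN certificate with Go's crypto/x509; errors are ignored for brevity, and the throwaway in-memory CA here stands in for the real ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // A throwaway CA, standing in for .minikube/certs/ca.pem / ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the same kind of SAN list the log shows:
        // both IPs plus the machine name, localhost and minikube.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "embed-certs-143118", Organization: []string{"jenkins.embed-certs-143118"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.137")},
            DNSNames:     []string{"embed-certs-143118", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
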
	I0404 22:54:34.418876   64902 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:34.419050   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:34.419118   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.422414   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422794   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.422822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422989   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.423196   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423395   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423548   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.423706   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.423903   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.423926   64902 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:34.698283   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:34.698315   64902 machine.go:97] duration metric: took 1.136579802s to provisionDockerMachine
	I0404 22:54:34.698330   64902 start.go:293] postStartSetup for "embed-certs-143118" (driver="kvm2")
	I0404 22:54:34.698343   64902 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:34.698362   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.698738   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:34.698767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.701491   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.701869   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.701899   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.702062   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.702269   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.702410   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.702580   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.787376   64902 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:34.791940   64902 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:34.791969   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:34.792032   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:34.792113   64902 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:34.792228   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:34.801800   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:34.828107   64902 start.go:296] duration metric: took 129.762377ms for postStartSetup
	I0404 22:54:34.828175   64902 fix.go:56] duration metric: took 20.61909326s for fixHost
	I0404 22:54:34.828200   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.831336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.831914   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.831939   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.832211   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.832443   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832725   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832911   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.833074   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.833242   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.833253   64902 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:54:34.941376   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271274.913625729
	
	I0404 22:54:34.941401   64902 fix.go:216] guest clock: 1712271274.913625729
	I0404 22:54:34.941409   64902 fix.go:229] Guest: 2024-04-04 22:54:34.913625729 +0000 UTC Remote: 2024-04-04 22:54:34.828180786 +0000 UTC m=+288.480480037 (delta=85.444943ms)
	I0404 22:54:34.941435   64902 fix.go:200] guest clock delta is within tolerance: 85.444943ms
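
The "guest clock" lines above compare the VM's date +%s.%N output against the host-side timestamp and only act if the delta exceeds a tolerance; here the 85ms skew is accepted. A small sketch of that comparison, using the values from the log; the one-second tolerance is an assumption for illustration, not minikube's exact threshold:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // withinTolerance parses the guest's "seconds.nanoseconds" timestamp and
    // reports whether it is within tol of the host clock.
    func withinTolerance(guestStamp string, host time.Time, tol time.Duration) (time.Duration, bool) {
        secs, err := strconv.ParseFloat(guestStamp, 64)
        if err != nil {
            return 0, false
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol
    }

    func main() {
        // Guest and host timestamps taken from the log lines above.
        delta, ok := withinTolerance("1712271274.913625729", time.Unix(0, 1712271274828180786), time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }
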
	I0404 22:54:34.941442   64902 start.go:83] releasing machines lock for "embed-certs-143118", held for 20.732383788s
	I0404 22:54:34.941472   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.941770   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:34.944521   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.944943   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.944973   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.945137   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945761   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945994   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.946079   64902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:34.946123   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.946244   64902 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:34.946271   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.948974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949059   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949433   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949468   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949503   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949597   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949832   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949893   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950007   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950094   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950167   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950239   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.950311   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:35.061691   64902 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:35.068941   64902 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:35.222979   64902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:35.230776   64902 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:35.230861   64902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:35.250962   64902 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:35.250993   64902 start.go:494] detecting cgroup driver to use...
	I0404 22:54:35.251078   64902 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:35.270027   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:35.286582   64902 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:35.286642   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:35.305465   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:35.323473   64902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:35.448815   64902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:35.601864   64902 docker.go:233] disabling docker service ...
	I0404 22:54:35.602013   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:35.621617   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:35.638755   64902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:35.784051   64902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:35.909763   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:35.925820   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:35.946492   64902 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:35.946555   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.958517   64902 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:35.958584   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.971470   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.983820   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.996000   64902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:36.009730   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.022318   64902 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.042685   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
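
The run of sed -i commands above patches /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the cgroup manager, conmon's cgroup, and the unprivileged-port sysctl. The same kind of in-place rewrite can be sketched in Go with regexp, operating on a local copy of the file; treat this as an illustration of the edits, not minikube's implementation:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // patchCrioConf rewrites the pause image and cgroup manager lines the way
    // the logged sed commands do, against a local copy of the config file.
    func patchCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        conf := string(data)
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // conmon should run in the pod cgroup when cgroupfs is the manager.
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
            ReplaceAllString(conf, `conmon_cgroup = "pod"`)
        return os.WriteFile(path, []byte(conf), 0o644)
    }

    func main() {
        if err := patchCrioConf("02-crio.conf"); err != nil {
            fmt.Println("patch failed:", err)
        }
    }
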
	I0404 22:54:36.054710   64902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:36.066952   64902 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:36.067023   64902 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:36.083843   64902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:54:36.096564   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:36.228943   64902 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:54:36.376198   64902 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:36.376276   64902 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:36.382097   64902 start.go:562] Will wait 60s for crictl version
	I0404 22:54:36.382176   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:54:36.386651   64902 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:36.425845   64902 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:36.425931   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.456397   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.491436   64902 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:54:34.967133   65047 main.go:141] libmachine: (no-preload-024416) Calling .Start
	I0404 22:54:34.967354   65047 main.go:141] libmachine: (no-preload-024416) Ensuring networks are active...
	I0404 22:54:34.968261   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network default is active
	I0404 22:54:34.968700   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network mk-no-preload-024416 is active
	I0404 22:54:34.969252   65047 main.go:141] libmachine: (no-preload-024416) Getting domain xml...
	I0404 22:54:34.969956   65047 main.go:141] libmachine: (no-preload-024416) Creating domain...
	I0404 22:54:36.226773   65047 main.go:141] libmachine: (no-preload-024416) Waiting to get IP...
	I0404 22:54:36.227926   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.228451   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.228535   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.228427   65995 retry.go:31] will retry after 247.657069ms: waiting for machine to come up
	I0404 22:54:36.477997   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.478543   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.478576   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.478489   65995 retry.go:31] will retry after 249.517341ms: waiting for machine to come up
	I0404 22:54:36.730176   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.730636   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.730665   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.730591   65995 retry.go:31] will retry after 476.552832ms: waiting for machine to come up
	I0404 22:54:37.209208   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.209677   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.209709   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.209661   65995 retry.go:31] will retry after 435.547363ms: waiting for machine to come up
	I0404 22:54:37.647412   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.647985   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.648014   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.647932   65995 retry.go:31] will retry after 506.589084ms: waiting for machine to come up
	I0404 22:54:38.155673   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:38.156207   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:38.156255   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:38.156171   65995 retry.go:31] will retry after 890.290504ms: waiting for machine to come up
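
The no-preload-024416 start above is a plain poll-with-growing-backoff loop: ask libvirt for the domain's DHCP lease, and if no IP has been assigned yet, sleep a little longer and retry. A stripped-down version of that pattern; lookupIP is a placeholder for the libvirt/DHCP query and the returned address is a documentation-range placeholder, not a real lease:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // lookupIP stands in for querying the domain's DHCP lease; here it simply
    // fails a few times before "finding" an address.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errNoIP
        }
        return "192.0.2.10", nil
    }

    // waitForIP retries with a jittered, growing delay, as the retry.go lines do.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for attempt := 0; time.Now().Before(deadline); attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                return ip, nil
            }
            delay := time.Duration(200+rand.Intn(300))*time.Millisecond + time.Duration(attempt)*200*time.Millisecond
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        ip, err := waitForIP(30 * time.Second)
        fmt.Println(ip, err)
    }
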
	I0404 22:54:36.493249   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:36.496475   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.496905   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:36.496937   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.497232   64902 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:36.502139   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:36.517272   64902 kubeadm.go:877] updating cluster {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:36.517432   64902 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:54:36.517497   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:36.564031   64902 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:54:36.564108   64902 ssh_runner.go:195] Run: which lz4
	I0404 22:54:36.568713   64902 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:54:36.573960   64902 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:54:36.574006   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:54:38.239579   64902 crio.go:462] duration metric: took 1.670908361s to copy over tarball
	I0404 22:54:38.239657   64902 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:54:40.665443   64902 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.425757745s)
	I0404 22:54:40.665487   64902 crio.go:469] duration metric: took 2.425877834s to extract the tarball
	I0404 22:54:40.665496   64902 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:54:40.703772   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:40.754221   64902 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:54:40.754248   64902 cache_images.go:84] Images are preloaded, skipping loading
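
The preload handling above follows one decision path: if the CRI image store does not already hold the expected images, stat /preloaded.tar.lz4 on the guest, copy the cached tarball over, extract it under /var with lz4, remove the tarball, and re-check the image store. The flow reduced to local commands; paths and the tar invocation mirror the log, but this is a sketch rather than minikube's code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensurePreload copies and extracts the preloaded image tarball only when
    // it is not already present at dst, mirroring the stat / scp / tar / rm
    // sequence in the log.
    func ensurePreload(src, dst, extractDir string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // already copied over
        }
        if err := exec.Command("cp", src, dst).Run(); err != nil {
            return fmt.Errorf("copy tarball: %w", err)
        }
        // Same extraction flags as the logged command, run against a local dir.
        cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", extractDir, "-xf", dst)
        if err := cmd.Run(); err != nil {
            return fmt.Errorf("extract tarball: %w", err)
        }
        return os.Remove(dst)
    }

    func main() {
        if err := ensurePreload("preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4",
            "/tmp/preloaded.tar.lz4", "/tmp/preload"); err != nil {
            fmt.Println(err)
        }
    }
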
	I0404 22:54:40.754256   64902 kubeadm.go:928] updating node { 192.168.61.137 8443 v1.29.3 crio true true} ...
	I0404 22:54:40.754363   64902 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-143118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
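
The kubelet unit shown above is rendered from the node's values (the kubelet binary for the target Kubernetes version, the hostname override, the node IP) and, a few lines later, scp'd to the guest as the 10-kubeadm.conf drop-in. A text/template sketch of that rendering; the struct fields here are ours, chosen only to make the example self-contained:

    package main

    import (
        "os"
        "text/template"
    )

    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        data := struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.29.3", "embed-certs-143118", "192.168.61.137"}
        // The rendered output would be written to
        // /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the guest.
        template.Must(template.New("kubelet").Parse(unitTmpl)).Execute(os.Stdout, data)
    }
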
	I0404 22:54:40.754464   64902 ssh_runner.go:195] Run: crio config
	I0404 22:54:40.811854   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:40.811888   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:40.811906   64902 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:54:40.811936   64902 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-143118 NodeName:embed-certs-143118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:54:40.812177   64902 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-143118"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:54:40.812313   64902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:54:40.824209   64902 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:54:40.824295   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:54:40.835585   64902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0404 22:54:40.854777   64902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:54:40.875571   64902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0404 22:54:40.896639   64902 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0404 22:54:40.901267   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
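
The two commands above implement "add control-plane.minikube.internal to /etc/hosts exactly once": grep for the entry first, and only when it is missing filter out any stale line for that name, append a fresh ip-to-name mapping, and copy the file back into place. A Go sketch of the same idea over an arbitrary hosts file:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line that maps the host and appends a
    // fresh "ip\thost" entry, which is what the logged bash pipeline does.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                if strings.HasPrefix(line, ip+"\t") {
                    return nil // already correct, nothing to do
                }
                continue // stale mapping, drop it
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("hosts", "192.168.61.137", "control-plane.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
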
	I0404 22:54:40.916843   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:41.050188   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:41.070935   64902 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118 for IP: 192.168.61.137
	I0404 22:54:41.070957   64902 certs.go:194] generating shared ca certs ...
	I0404 22:54:41.070972   64902 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:41.071132   64902 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:54:41.071191   64902 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:54:41.071205   64902 certs.go:256] generating profile certs ...
	I0404 22:54:41.071322   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/client.key
	I0404 22:54:41.071399   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key.e7f8ac5b
	I0404 22:54:41.071435   64902 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key
	I0404 22:54:41.071553   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:54:41.071585   64902 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:54:41.071596   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:54:41.071624   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:54:41.071649   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:54:41.071670   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:54:41.071725   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:41.072445   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:54:41.108370   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:54:41.142072   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:54:41.179263   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:54:41.226769   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0404 22:54:41.273570   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:54:41.306526   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:54:41.336764   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:54:41.367053   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:54:39.048106   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.048587   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.048616   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.048540   65995 retry.go:31] will retry after 946.742057ms: waiting for machine to come up
	I0404 22:54:39.997241   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.997737   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.997774   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.997679   65995 retry.go:31] will retry after 1.053079472s: waiting for machine to come up
	I0404 22:54:41.052284   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:41.052796   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:41.052835   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:41.052762   65995 retry.go:31] will retry after 1.551456209s: waiting for machine to come up
	I0404 22:54:42.606789   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:42.607297   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:42.607335   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:42.607228   65995 retry.go:31] will retry after 2.022953695s: waiting for machine to come up
	I0404 22:54:41.395449   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:54:41.549507   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:54:41.578394   64902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:54:41.597960   64902 ssh_runner.go:195] Run: openssl version
	I0404 22:54:41.604217   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:54:41.618340   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623568   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623626   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.630781   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:54:41.644981   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:54:41.657992   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663188   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663242   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.670992   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:54:41.683812   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:54:41.696848   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702441   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702499   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.709270   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:54:41.722509   64902 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:54:41.727688   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:54:41.734456   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:54:41.741006   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:54:41.748106   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:54:41.754559   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:54:41.761107   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
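
The run of openssl x509 -noout -checkend 86400 calls above verifies that each existing control-plane certificate remains valid for at least another day before the old configuration is reused. The same check expressed with Go's crypto/x509; the file path in main is a placeholder:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // validForAnotherDay mirrors "openssl x509 -checkend 86400": it fails if
    // the certificate expires within the next 24 hours.
    func validForAnotherDay(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            return fmt.Errorf("certificate expires at %v", cert.NotAfter)
        }
        return nil
    }

    func main() {
        fmt.Println(validForAnotherDay("apiserver-kubelet-client.crt"))
    }
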
	I0404 22:54:41.768416   64902 kubeadm.go:391] StartCluster: {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:54:41.768497   64902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:54:41.768557   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.815664   64902 cri.go:89] found id: ""
	I0404 22:54:41.815737   64902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:54:41.829255   64902 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:54:41.829283   64902 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:54:41.829290   64902 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:54:41.829333   64902 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:54:41.842482   64902 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:54:41.843868   64902 kubeconfig.go:125] found "embed-certs-143118" server: "https://192.168.61.137:8443"
	I0404 22:54:41.846707   64902 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:54:41.858505   64902 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.137
	I0404 22:54:41.858544   64902 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:54:41.858558   64902 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:54:41.858616   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.905608   64902 cri.go:89] found id: ""
	I0404 22:54:41.905686   64902 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:54:41.928336   64902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:54:41.939893   64902 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:54:41.939913   64902 kubeadm.go:156] found existing configuration files:
	
	I0404 22:54:41.939967   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:54:41.950100   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:54:41.950159   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:54:41.961297   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:54:41.972267   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:54:41.972350   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:54:41.983395   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:54:41.994162   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:54:41.994256   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:54:42.005118   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:54:42.015514   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:54:42.015583   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:54:42.027013   64902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:54:42.037495   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:42.144612   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.301722   64902 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.157071112s)
	I0404 22:54:43.301751   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.527881   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.621621   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
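Rather than re-running a full kubeadm init, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. The commands are exactly the ones logged; the Go wrapper below is only an illustrative sketch of driving that sequence:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(append([]string{}, p...), "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command("kubeadm", args...).CombinedOutput()
            fmt.Printf("kubeadm %v:\n%s\n", p, out)
            if err != nil {
                panic(err) // a failed phase leaves the control plane unusable
            }
        }
    }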
	I0404 22:54:43.708816   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:54:43.708949   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.209626   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.709138   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.734681   64902 api_server.go:72] duration metric: took 1.025863443s to wait for apiserver process to appear ...
	I0404 22:54:44.734715   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:54:44.734742   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.735296   64902 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
	I0404 22:54:45.235123   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.632681   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:44.633163   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:44.633192   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:44.633124   65995 retry.go:31] will retry after 2.627056472s: waiting for machine to come up
	I0404 22:54:47.262212   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:47.262588   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:47.262618   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:47.262539   65995 retry.go:31] will retry after 3.141452547s: waiting for machine to come up
	I0404 22:54:47.737311   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.737339   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:47.737356   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:47.782485   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.782519   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:48.235046   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.240034   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.240068   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:48.735331   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.745583   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.745624   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:49.235513   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:49.239826   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:54:49.248617   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:54:49.248647   64902 api_server.go:131] duration metric: took 4.51392228s to wait for apiserver health ...
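The healthz probes above follow a predictable progression after a control-plane restart: connection refused while the apiserver process is still coming up, then 403 because the probe is unauthenticated and the bootstrap RBAC rules that allow anonymous access to /healthz are only created shortly after startup, then 500 while the remaining post-start hooks (rbac/bootstrap-roles, system priority classes) finish, and finally 200. The sketch below mirrors that behaviour; it is illustrative only, not minikube's api_server.go, and skips TLS verification for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 or the timeout elapses.
    // Connection errors mean "not listening yet"; any HTTP status means the
    // process is up but not yet healthy.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Real code should trust the cluster CA instead of skipping verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.137:8443/healthz", 4*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("apiserver healthy")
    }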
	I0404 22:54:49.248655   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:49.248662   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:49.250501   64902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:54:49.252206   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:54:49.271104   64902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
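The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist above are the bridge CNI configuration used with the crio runtime. The log does not show the template itself; the snippet below writes a conflist of the same general shape (bridge plugin with host-local IPAM plus portmap) and is an illustration, not minikube's literal file:

    package main

    import "os"

    // Illustrative bridge CNI config; minikube's actual 1-k8s.conflist may differ.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }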
	I0404 22:54:49.297257   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:54:49.309692   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:54:49.309726   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:54:49.309733   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:54:49.309741   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:54:49.309746   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:54:49.309751   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:54:49.309756   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:54:49.309764   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:54:49.309768   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:54:49.309775   64902 system_pods.go:74] duration metric: took 12.491664ms to wait for pod list to return data ...
	I0404 22:54:49.309782   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:54:49.315864   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:54:49.315890   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:54:49.315901   64902 node_conditions.go:105] duration metric: took 6.114458ms to run NodePressure ...
	I0404 22:54:49.315926   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:49.610849   64902 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615336   64902 kubeadm.go:733] kubelet initialised
	I0404 22:54:49.615356   64902 kubeadm.go:734] duration metric: took 4.484086ms waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615364   64902 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:49.621224   64902 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.626963   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.626988   64902 pod_ready.go:81] duration metric: took 5.735991ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.626996   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.627002   64902 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.633596   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633620   64902 pod_ready.go:81] duration metric: took 6.610566ms for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.633628   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633634   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.639137   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639168   64902 pod_ready.go:81] duration metric: took 5.51865ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.639177   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639183   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.701443   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701475   64902 pod_ready.go:81] duration metric: took 62.283656ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.701488   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701497   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.100946   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100974   64902 pod_ready.go:81] duration metric: took 399.467513ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.100984   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100990   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.501078   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501115   64902 pod_ready.go:81] duration metric: took 400.110991ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.501126   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501135   64902 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.902773   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902804   64902 pod_ready.go:81] duration metric: took 401.658155ms for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.902817   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902826   64902 pod_ready.go:38] duration metric: took 1.287454227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
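The pod_ready checks above ask the apiserver for each system-critical pod and skip the wait while the node itself still reports Ready=False. A condensed version of that check with client-go might look like this (hypothetical helper, not minikube's pod_ready.go):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named kube-system pod has condition Ready=True.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16143-5297/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := podReady(context.Background(), cs, "etcd-embed-certs-143118")
        fmt.Println(ready, err)
    }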
	I0404 22:54:50.902842   64902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:54:50.915531   64902 ops.go:34] apiserver oom_adj: -16
	I0404 22:54:50.915553   64902 kubeadm.go:591] duration metric: took 9.086257382s to restartPrimaryControlPlane
	I0404 22:54:50.915562   64902 kubeadm.go:393] duration metric: took 9.147152807s to StartCluster
	I0404 22:54:50.915580   64902 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.915663   64902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:54:50.917278   64902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.917558   64902 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:54:50.919536   64902 out.go:177] * Verifying Kubernetes components...
	I0404 22:54:50.917632   64902 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:54:50.917801   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:50.921079   64902 addons.go:69] Setting default-storageclass=true in profile "embed-certs-143118"
	I0404 22:54:50.921090   64902 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-143118"
	I0404 22:54:50.921093   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:50.921114   64902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-143118"
	I0404 22:54:50.921138   64902 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-143118"
	W0404 22:54:50.921153   64902 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:54:50.921079   64902 addons.go:69] Setting metrics-server=true in profile "embed-certs-143118"
	I0404 22:54:50.921195   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921217   64902 addons.go:234] Setting addon metrics-server=true in "embed-certs-143118"
	W0404 22:54:50.921232   64902 addons.go:243] addon metrics-server should already be in state true
	I0404 22:54:50.921254   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921467   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921507   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921683   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921709   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921753   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921781   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.937216   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36235
	I0404 22:54:50.937299   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0404 22:54:50.937762   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.937802   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.938291   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938313   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938321   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938333   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938661   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.938670   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.939249   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939271   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939300   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.939311   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.941042   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0404 22:54:50.941525   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.942072   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.942100   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.942499   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.942729   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.946461   64902 addons.go:234] Setting addon default-storageclass=true in "embed-certs-143118"
	W0404 22:54:50.946482   64902 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:54:50.946510   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.946882   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.946915   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.955518   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0404 22:54:50.955557   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0404 22:54:50.955998   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956052   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956571   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956600   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956720   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956747   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956986   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957070   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957190   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.957232   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.959071   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.959241   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.961349   64902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:50.963021   64902 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:54:50.964576   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:54:50.964596   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:54:50.963102   64902 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:50.964618   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.964632   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:54:50.964657   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.965167   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0404 22:54:50.965591   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.966202   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.966227   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.966554   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.967093   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.967129   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.968169   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968490   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968564   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.968589   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968740   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.968871   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969009   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969078   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.969104   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.969280   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.969336   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.969576   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969732   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969883   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.985964   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0404 22:54:50.986398   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.986935   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.986961   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.987432   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.987635   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.989705   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.989956   64902 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:50.989970   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:54:50.989984   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.993058   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993505   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.993540   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993703   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.993916   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.994072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.994255   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:51.108711   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:51.128024   64902 node_ready.go:35] waiting up to 6m0s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:51.190119   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:51.199620   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:54:51.199648   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:54:51.223007   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:51.235174   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:54:51.235203   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:54:51.260985   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:51.261010   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:54:51.283555   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:52.285657   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095501296s)
	I0404 22:54:52.285706   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.062659931s)
	I0404 22:54:52.285731   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285744   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285755   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.285767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286091   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286108   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286118   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286128   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286218   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.286282   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286294   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286306   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286328   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.288015   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288022   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288013   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288092   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288106   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.288149   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.293937   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.293962   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.294199   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.294243   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.294254   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320063   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.036455618s)
	I0404 22:54:52.320206   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320223   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320525   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320546   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320559   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320568   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320821   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320853   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320860   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320870   64902 addons.go:470] Verifying addon metrics-server=true in "embed-certs-143118"
	I0404 22:54:52.323920   64902 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0404 22:54:50.405817   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:50.406204   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:50.406240   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:50.406163   65995 retry.go:31] will retry after 3.600637009s: waiting for machine to come up
	I0404 22:54:55.277382   65393 start.go:364] duration metric: took 3m58.692301923s to acquireMachinesLock for "old-k8s-version-343162"
	I0404 22:54:55.277485   65393 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:55.277502   65393 fix.go:54] fixHost starting: 
	I0404 22:54:55.277878   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:55.277920   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:55.297525   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0404 22:54:55.297963   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:55.298433   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:54:55.298456   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:55.298792   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:55.298962   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:54:55.299129   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetState
	I0404 22:54:55.300682   65393 fix.go:112] recreateIfNeeded on old-k8s-version-343162: state=Stopped err=<nil>
	I0404 22:54:55.300717   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	W0404 22:54:55.300937   65393 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:55.303925   65393 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-343162" ...
	I0404 22:54:52.325280   64902 addons.go:505] duration metric: took 1.407646741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0404 22:54:53.132199   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.136881   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.305412   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .Start
	I0404 22:54:55.305607   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring networks are active...
	I0404 22:54:55.306344   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network default is active
	I0404 22:54:55.306786   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network mk-old-k8s-version-343162 is active
	I0404 22:54:55.307281   65393 main.go:141] libmachine: (old-k8s-version-343162) Getting domain xml...
	I0404 22:54:55.308086   65393 main.go:141] libmachine: (old-k8s-version-343162) Creating domain...
	I0404 22:54:54.010930   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011401   65047 main.go:141] libmachine: (no-preload-024416) Found IP for machine: 192.168.50.77
	I0404 22:54:54.011426   65047 main.go:141] libmachine: (no-preload-024416) Reserving static IP address...
	I0404 22:54:54.011444   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has current primary IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011871   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.011896   65047 main.go:141] libmachine: (no-preload-024416) DBG | skip adding static IP to network mk-no-preload-024416 - found existing host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"}
	I0404 22:54:54.011924   65047 main.go:141] libmachine: (no-preload-024416) Reserved static IP address: 192.168.50.77
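The driver located the guest's address above by matching the VM's MAC against the libvirt network's DHCP leases (after the earlier retry loop while no lease existed yet). From the command line the equivalent is roughly "virsh net-dhcp-leases mk-no-preload-024416" filtered by the MAC; the sketch below does the same via os/exec and is an assumed helper, not the kvm2 driver's code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // leaseIP scans `virsh net-dhcp-leases <network>` for the given MAC and
    // returns the IP column (e.g. "192.168.50.77/24").
    func leaseIP(network, mac string) (string, error) {
        out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
        if err != nil {
            return "", err
        }
        for _, line := range strings.Split(string(out), "\n") {
            if !strings.Contains(line, mac) {
                continue
            }
            // Columns: expiry date, expiry time, MAC, protocol, IP, hostname, client ID
            fields := strings.Fields(line)
            if len(fields) >= 5 {
                return fields[4], nil
            }
        }
        return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
    }

    func main() {
        ip, err := leaseIP("mk-no-preload-024416", "52:54:00:9b:35:e3")
        fmt.Println(ip, err)
    }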
	I0404 22:54:54.011942   65047 main.go:141] libmachine: (no-preload-024416) Waiting for SSH to be available...
	I0404 22:54:54.011956   65047 main.go:141] libmachine: (no-preload-024416) DBG | Getting to WaitForSSH function...
	I0404 22:54:54.014714   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015164   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.015190   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015357   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH client type: external
	I0404 22:54:54.015401   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa (-rw-------)
	I0404 22:54:54.015469   65047 main.go:141] libmachine: (no-preload-024416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:54.015495   65047 main.go:141] libmachine: (no-preload-024416) DBG | About to run SSH command:
	I0404 22:54:54.015513   65047 main.go:141] libmachine: (no-preload-024416) DBG | exit 0
	I0404 22:54:54.140293   65047 main.go:141] libmachine: (no-preload-024416) DBG | SSH cmd err, output: <nil>: 
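Once the logged external ssh probe (running "exit 0" with the per-machine key) succeeds, the machine is considered reachable and the remaining provisioning steps, such as the hostname setup a few lines below, run as shell commands over SSH. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; the helper name and the decision to skip host-key checking are assumptions made for brevity, and the real driver shells out to the ssh binary for the probe itself, as the logged command shows:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH executes cmd on addr as user, authenticating with the private key at keyPath.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("192.168.50.77:22", "docker",
            "/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa",
            `sudo hostname no-preload-024416 && echo "no-preload-024416" | sudo tee /etc/hostname`)
        fmt.Println(out, err)
    }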
	I0404 22:54:54.140713   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetConfigRaw
	I0404 22:54:54.141290   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.144062   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144446   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.144476   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144752   65047 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/config.json ...
	I0404 22:54:54.144967   65047 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:54.144984   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:54.145210   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.147699   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148016   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.148046   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148244   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.148430   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148571   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.148878   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.149071   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.149082   65047 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:54.256984   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:54.257021   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257324   65047 buildroot.go:166] provisioning hostname "no-preload-024416"
	I0404 22:54:54.257354   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257530   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.259995   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260339   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.260369   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.260709   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260862   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260986   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.261172   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.261348   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.261360   65047 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-024416 && echo "no-preload-024416" | sudo tee /etc/hostname
	I0404 22:54:54.383376   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-024416
	
	I0404 22:54:54.383401   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.386482   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.386993   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.387035   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.387179   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.387368   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387705   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.387954   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.388225   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.388258   65047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-024416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-024416/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-024416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:54.506307   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:54.506341   65047 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:54.506358   65047 buildroot.go:174] setting up certificates
	I0404 22:54:54.506366   65047 provision.go:84] configureAuth start
	I0404 22:54:54.506375   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.506653   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.509737   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510146   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.510177   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.512864   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513212   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.513244   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513339   65047 provision.go:143] copyHostCerts
	I0404 22:54:54.513391   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:54.513410   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:54.513464   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:54.513547   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:54.513559   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:54.513579   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:54.513642   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:54.513653   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:54.513672   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:54.513728   65047 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.no-preload-024416 san=[127.0.0.1 192.168.50.77 localhost minikube no-preload-024416]
	I0404 22:54:54.574205   65047 provision.go:177] copyRemoteCerts
	I0404 22:54:54.574261   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:54.574292   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.577012   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577382   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.577417   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577576   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.577750   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.577913   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.578009   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:54.662795   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:54:54.688507   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:54.714322   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:54.743237   65047 provision.go:87] duration metric: took 236.859201ms to configureAuth
	I0404 22:54:54.743277   65047 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:54.743504   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:54:54.743573   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.746762   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747249   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.747277   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747454   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.747656   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747791   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747912   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.748073   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.748321   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.748342   65047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:55.023000   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:55.023025   65047 machine.go:97] duration metric: took 878.045187ms to provisionDockerMachine
	I0404 22:54:55.023038   65047 start.go:293] postStartSetup for "no-preload-024416" (driver="kvm2")
	I0404 22:54:55.023052   65047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:55.023072   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.023459   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:55.023493   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.026491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.026914   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.026938   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.027162   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.027350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.027490   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.027627   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.111497   65047 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:55.116152   65047 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:55.116178   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:55.116260   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:55.116331   65047 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:55.116412   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:55.126233   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:55.159228   65047 start.go:296] duration metric: took 136.175727ms for postStartSetup
	I0404 22:54:55.159267   65047 fix.go:56] duration metric: took 20.217610648s for fixHost
	I0404 22:54:55.159286   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.162009   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162439   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.162469   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162665   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.162941   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163119   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163353   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.163531   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:55.163697   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:55.163707   65047 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:54:55.277216   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271295.257019839
	
	I0404 22:54:55.277241   65047 fix.go:216] guest clock: 1712271295.257019839
	I0404 22:54:55.277254   65047 fix.go:229] Guest: 2024-04-04 22:54:55.257019839 +0000 UTC Remote: 2024-04-04 22:54:55.159270151 +0000 UTC m=+291.659154910 (delta=97.749688ms)
	I0404 22:54:55.277281   65047 fix.go:200] guest clock delta is within tolerance: 97.749688ms
	I0404 22:54:55.277308   65047 start.go:83] releasing machines lock for "no-preload-024416", held for 20.335673292s
	I0404 22:54:55.277345   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.277650   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:55.280507   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.280968   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.280998   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.281137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281654   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281845   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281930   65047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:55.281967   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.282068   65047 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:55.282085   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.284662   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.284975   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285007   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285034   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285217   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285379   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.285469   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285542   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.285677   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285722   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.286037   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.286197   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.286343   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.404469   65047 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:55.411104   65047 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:55.570700   65047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:55.577807   65047 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:55.577879   65047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:55.595095   65047 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:55.595115   65047 start.go:494] detecting cgroup driver to use...
	I0404 22:54:55.595180   65047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:55.611801   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:55.626402   65047 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:55.626460   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:55.642719   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:55.663140   65047 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:55.784155   65047 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:55.933709   65047 docker.go:233] disabling docker service ...
	I0404 22:54:55.933782   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:55.951743   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:55.966349   65047 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:56.129621   65047 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:56.295246   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:56.312815   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:56.334486   65047 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:56.334554   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.351987   65047 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:56.352067   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.368799   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.390239   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.409407   65047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:56.421785   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.434016   65047 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.457813   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.474509   65047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:56.489126   65047 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:56.489186   65047 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:56.509461   65047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:54:56.523048   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:56.708034   65047 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:54:56.860854   65047 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:56.860931   65047 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:56.867310   65047 start.go:562] Will wait 60s for crictl version
	I0404 22:54:56.867380   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:56.871431   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:56.911015   65047 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:56.911096   65047 ssh_runner.go:195] Run: crio --version
	I0404 22:54:56.942313   65047 ssh_runner.go:195] Run: crio --version
	I0404 22:54:56.985247   65047 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0404 22:54:56.986628   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:56.990377   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.990775   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:56.990805   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.991069   65047 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:56.996831   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:57.013429   65047 kubeadm.go:877] updating cluster {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:57.013580   65047 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 22:54:57.013630   65047 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:57.066460   65047 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0404 22:54:57.066491   65047 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:54:57.066546   65047 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.066569   65047 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.066598   65047 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.066677   65047 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.066674   65047 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.066703   65047 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.066755   65047 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0404 22:54:57.066974   65047 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068510   65047 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068579   65047 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.068590   65047 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.068603   65047 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.068648   65047 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.068516   65047 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.068666   65047 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0404 22:54:57.068727   65047 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.282811   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.319459   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.329131   65047 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0404 22:54:57.329170   65047 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.329216   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.334176   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.395259   65047 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0404 22:54:57.395307   65047 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.395333   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.395348   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.406668   65047 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0404 22:54:57.406719   65047 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.406769   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.428655   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0404 22:54:57.430830   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.434683   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.439273   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439326   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.439406   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439428   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.565067   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690361   65047 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0404 22:54:57.690402   65047 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.690456   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690478   65047 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0404 22:54:57.690509   65047 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.690556   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690563   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690608   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0404 22:54:57.690635   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0404 22:54:57.690653   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690669   65047 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0404 22:54:57.690702   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690737   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:57.690702   65047 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690667   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690772   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.702038   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.703294   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0404 22:54:57.703314   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.861311   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.632801   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:58.632291   64902 node_ready.go:49] node "embed-certs-143118" has status "Ready":"True"
	I0404 22:54:58.632328   64902 node_ready.go:38] duration metric: took 7.504264997s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:58.632341   64902 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:58.640851   64902 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652075   64902 pod_ready.go:92] pod "coredns-76f75df574-9qh9s" in "kube-system" namespace has status "Ready":"True"
	I0404 22:54:58.652100   64902 pod_ready.go:81] duration metric: took 11.215741ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652114   64902 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:00.659219   64902 pod_ready.go:102] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"False"
	I0404 22:54:56.653772   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting to get IP...
	I0404 22:54:56.654708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.655215   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.655284   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.655205   66200 retry.go:31] will retry after 274.074964ms: waiting for machine to come up
	I0404 22:54:56.930932   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.931386   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.931451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.931345   66200 retry.go:31] will retry after 260.84968ms: waiting for machine to come up
	I0404 22:54:57.193886   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.194346   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.194368   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.194299   66200 retry.go:31] will retry after 334.40821ms: waiting for machine to come up
	I0404 22:54:57.530137   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.530694   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.530742   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.530631   66200 retry.go:31] will retry after 514.498594ms: waiting for machine to come up
	I0404 22:54:58.046394   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.046985   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.047025   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.046905   66200 retry.go:31] will retry after 466.899368ms: waiting for machine to come up
	I0404 22:54:58.515516   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.516090   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.516224   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.516149   66200 retry.go:31] will retry after 670.74835ms: waiting for machine to come up
	I0404 22:54:59.187992   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:59.188549   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:59.188579   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:59.188496   66200 retry.go:31] will retry after 849.489739ms: waiting for machine to come up
	I0404 22:55:00.039689   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:00.040255   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:00.040290   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:00.040190   66200 retry.go:31] will retry after 1.450679545s: waiting for machine to come up
	I0404 22:54:59.998625   65047 ssh_runner.go:235] Completed: which crictl: (2.307831694s)
	I0404 22:54:59.998689   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.307893929s)
	I0404 22:54:59.998766   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0404 22:54:59.998771   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0: (2.296702121s)
	I0404 22:54:59.998810   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0: (2.295483217s)
	I0404 22:54:59.998704   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:59.998853   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998720   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.307994873s)
	I0404 22:54:59.998819   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.998930   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0404 22:54:59.998951   65047 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998854   65047 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.137514139s)
	I0404 22:54:59.998961   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998990   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998992   65047 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0404 22:54:59.998998   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.999027   65047 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:59.999070   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:55:00.009724   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:00.009945   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0404 22:55:01.660284   64902 pod_ready.go:92] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.660348   64902 pod_ready.go:81] duration metric: took 3.008175771s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.660362   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666661   64902 pod_ready.go:92] pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.666687   64902 pod_ready.go:81] duration metric: took 6.316138ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666701   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672891   64902 pod_ready.go:92] pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.672915   64902 pod_ready.go:81] duration metric: took 6.205783ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672928   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679237   64902 pod_ready.go:92] pod "kube-proxy-psst7" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.679268   64902 pod_ready.go:81] duration metric: took 6.332124ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679280   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833788   64902 pod_ready.go:92] pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.833817   64902 pod_ready.go:81] duration metric: took 154.528833ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833831   64902 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:03.841405   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:05.842970   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:01.492870   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:01.493414   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:01.493440   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:01.493366   66200 retry.go:31] will retry after 1.844779665s: waiting for machine to come up
	I0404 22:55:03.340065   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:03.340708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:03.340739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:03.340650   66200 retry.go:31] will retry after 1.954275124s: waiting for machine to come up
	I0404 22:55:05.297408   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:05.297938   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:05.297986   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:05.297876   66200 retry.go:31] will retry after 1.771664796s: waiting for machine to come up
	I0404 22:55:04.067188   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.068174109s)
	I0404 22:55:04.067285   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0404 22:55:04.067284   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.057537661s)
	I0404 22:55:04.067312   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067322   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0404 22:55:04.067244   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (4.068231608s)
	I0404 22:55:04.067203   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (4.068350332s)
	I0404 22:55:04.067371   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0404 22:55:04.067380   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067395   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0404 22:55:04.067414   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:04.067486   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:06.758753   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.691348313s)
	I0404 22:55:06.758790   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0404 22:55:06.758816   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:06.758812   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.691301898s)
	I0404 22:55:06.758845   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0404 22:55:06.758850   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.691417107s)
	I0404 22:55:06.758876   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0404 22:55:06.758884   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:07.843038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:10.342614   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:07.072194   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:07.072586   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:07.072614   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:07.072532   66200 retry.go:31] will retry after 2.503470191s: waiting for machine to come up
	I0404 22:55:09.577312   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:09.577837   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:09.577877   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:09.577785   66200 retry.go:31] will retry after 3.900843515s: waiting for machine to come up
	I0404 22:55:09.231002   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.472089065s)
	I0404 22:55:09.231033   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0404 22:55:09.231064   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:09.231114   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:10.787933   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.556787731s)
	I0404 22:55:10.788017   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0404 22:55:10.788087   65047 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:10.788163   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:12.668739   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880546401s)
	I0404 22:55:12.668772   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0404 22:55:12.668806   65047 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:12.668877   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:15.073701   64791 start.go:364] duration metric: took 55.863036918s to acquireMachinesLock for "default-k8s-diff-port-952083"
	I0404 22:55:15.073768   64791 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:55:15.073780   64791 fix.go:54] fixHost starting: 
	I0404 22:55:15.074227   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:15.074264   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:15.094449   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0404 22:55:15.094905   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:15.095422   64791 main.go:141] libmachine: Using API Version  1
	I0404 22:55:15.095446   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:15.095809   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:15.096067   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:15.096216   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 22:55:15.097752   64791 fix.go:112] recreateIfNeeded on default-k8s-diff-port-952083: state=Stopped err=<nil>
	I0404 22:55:15.097779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	W0404 22:55:15.097988   64791 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:55:15.100278   64791 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-952083" ...
	I0404 22:55:12.840463   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:14.841903   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:13.482797   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483264   65393 main.go:141] libmachine: (old-k8s-version-343162) Found IP for machine: 192.168.39.247
	I0404 22:55:13.483286   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has current primary IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483293   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserving static IP address...
	I0404 22:55:13.483713   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.483753   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserved static IP address: 192.168.39.247
	I0404 22:55:13.483776   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | skip adding static IP to network mk-old-k8s-version-343162 - found existing host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"}
	I0404 22:55:13.483790   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting for SSH to be available...
	I0404 22:55:13.483810   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Getting to WaitForSSH function...
	I0404 22:55:13.485889   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486260   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.486303   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486379   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH client type: external
	I0404 22:55:13.486411   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa (-rw-------)
	I0404 22:55:13.486451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:13.486465   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | About to run SSH command:
	I0404 22:55:13.486493   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | exit 0
	I0404 22:55:13.616680   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:13.617056   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetConfigRaw
	I0404 22:55:13.617819   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:13.620441   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.620959   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.620987   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.621259   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:55:13.621490   65393 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:13.621512   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:13.621715   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.624353   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.624739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.624769   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.625019   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.625218   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625392   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625540   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.625730   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.625987   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.626005   65393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:13.745162   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:13.745198   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745533   65393 buildroot.go:166] provisioning hostname "old-k8s-version-343162"
	I0404 22:55:13.745558   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745773   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.748881   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749270   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.749304   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749393   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.749619   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749787   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.750304   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.750599   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.750624   65393 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-343162 && echo "old-k8s-version-343162" | sudo tee /etc/hostname
	I0404 22:55:13.887895   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-343162
	
	I0404 22:55:13.887942   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.891149   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891606   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.891657   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891985   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.892226   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892464   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892629   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.892839   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.893110   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.893139   65393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-343162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-343162/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-343162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:14.026612   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:55:14.026651   65393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:14.026677   65393 buildroot.go:174] setting up certificates
	I0404 22:55:14.026691   65393 provision.go:84] configureAuth start
	I0404 22:55:14.026702   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:14.026996   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:14.030366   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.030831   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.030869   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.031171   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.033684   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034074   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.034115   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034266   65393 provision.go:143] copyHostCerts
	I0404 22:55:14.034319   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:14.034331   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:14.034384   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:14.034471   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:14.034479   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:14.034498   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:14.034559   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:14.034567   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:14.034585   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:14.034633   65393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-343162 san=[127.0.0.1 192.168.39.247 localhost minikube old-k8s-version-343162]
	I0404 22:55:14.305753   65393 provision.go:177] copyRemoteCerts
	I0404 22:55:14.305805   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:14.305830   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.308454   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308772   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.308812   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308940   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.309140   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.309315   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.309461   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.399449   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:14.431583   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0404 22:55:14.459757   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:55:14.489959   65393 provision.go:87] duration metric: took 463.256669ms to configureAuth
	I0404 22:55:14.489999   65393 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:14.490222   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:55:14.490314   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.493316   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493721   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.493746   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493935   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.494154   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494339   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494495   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.494669   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.494915   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.494940   65393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:14.813278   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:14.813306   65393 machine.go:97] duration metric: took 1.19180289s to provisionDockerMachine
	I0404 22:55:14.813318   65393 start.go:293] postStartSetup for "old-k8s-version-343162" (driver="kvm2")
	I0404 22:55:14.813328   65393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:14.813365   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:14.813712   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:14.813739   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.817108   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817543   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.817577   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817770   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.817970   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.818192   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.818371   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.908922   65393 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:14.914161   65393 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:14.914190   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:14.914279   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:14.914388   65393 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:14.914522   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:14.924829   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:14.952058   65393 start.go:296] duration metric: took 138.725667ms for postStartSetup
	I0404 22:55:14.952103   65393 fix.go:56] duration metric: took 19.674601379s for fixHost
	I0404 22:55:14.952151   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.954833   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955233   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.955273   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955501   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.955712   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.955866   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.956022   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.956189   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.956355   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.956367   65393 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0404 22:55:15.073532   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271315.019545661
	
	I0404 22:55:15.073560   65393 fix.go:216] guest clock: 1712271315.019545661
	I0404 22:55:15.073569   65393 fix.go:229] Guest: 2024-04-04 22:55:15.019545661 +0000 UTC Remote: 2024-04-04 22:55:14.952108062 +0000 UTC m=+258.528860140 (delta=67.437599ms)
	I0404 22:55:15.073595   65393 fix.go:200] guest clock delta is within tolerance: 67.437599ms
	I0404 22:55:15.073603   65393 start.go:83] releasing machines lock for "old-k8s-version-343162", held for 19.79616835s
	I0404 22:55:15.073645   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.073982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:15.077001   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077416   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.077464   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077684   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078280   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078473   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078552   65393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:15.078592   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.078687   65393 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:15.078714   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.081320   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081704   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.081724   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081752   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081995   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082190   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082212   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.082249   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.082366   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082519   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082557   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.082639   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082782   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082912   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.165958   65393 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:15.206476   65393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:15.356625   65393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:15.364893   65393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:15.364973   65393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:15.387634   65393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:15.387663   65393 start.go:494] detecting cgroup driver to use...
	I0404 22:55:15.387728   65393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:15.408455   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:15.427753   65393 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:15.427812   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:15.443713   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:15.462657   65393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:15.611611   65393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:15.799003   65393 docker.go:233] disabling docker service ...
	I0404 22:55:15.799058   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:15.819716   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:15.838428   65393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:15.998060   65393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:16.143821   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:16.162284   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:16.185030   65393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0404 22:55:16.185124   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.201354   65393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:16.201422   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.214191   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.232995   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.246872   65393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:16.260730   65393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:16.272532   65393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:16.272601   65393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:16.289516   65393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:16.306546   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:16.469991   65393 ssh_runner.go:195] Run: sudo systemctl restart crio
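	The restart above completes the container-runtime setup for this profile: the preceding lines show minikube pointing crictl at the CRI-O socket and then rewriting the /etc/crio/crio.conf.d/02-crio.conf drop-in so CRI-O uses the "cgroupfs" cgroup manager and the registry.k8s.io/pause:3.2 pause image. As a reference only, a minimal sketch of the equivalent manual steps on the guest (same drop-in path and values as in this run, assuming SSH access to the VM):
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl restart crio
	A few lines below, the log confirms the runtime came back by waiting for /var/run/crio/crio.sock and querying it with crictl version.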
	I0404 22:55:15.101770   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Start
	I0404 22:55:15.101942   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring networks are active...
	I0404 22:55:15.102800   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network default is active
	I0404 22:55:15.103162   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network mk-default-k8s-diff-port-952083 is active
	I0404 22:55:15.103685   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Getting domain xml...
	I0404 22:55:15.104598   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Creating domain...
	I0404 22:55:16.462772   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting to get IP...
	I0404 22:55:16.463782   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.464303   66369 retry.go:31] will retry after 209.546798ms: waiting for machine to come up
	I0404 22:55:16.618533   65393 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:16.618609   65393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:16.623677   65393 start.go:562] Will wait 60s for crictl version
	I0404 22:55:16.623740   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:16.627698   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:16.667097   65393 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:16.667195   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.700780   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.739287   65393 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0404 22:55:13.614563   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0404 22:55:13.614614   65047 cache_images.go:123] Successfully loaded all cached images
	I0404 22:55:13.614620   65047 cache_images.go:92] duration metric: took 16.548112387s to LoadCachedImages
	I0404 22:55:13.614638   65047 kubeadm.go:928] updating node { 192.168.50.77 8443 v1.30.0-rc.0 crio true true} ...
	I0404 22:55:13.614766   65047 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-024416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:13.614859   65047 ssh_runner.go:195] Run: crio config
	I0404 22:55:13.670321   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:13.670352   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:13.670368   65047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:13.670397   65047 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.77 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-024416 NodeName:no-preload-024416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:13.670593   65047 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-024416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:13.670688   65047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0404 22:55:13.683863   65047 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:13.683944   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:13.694995   65047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0404 22:55:13.717964   65047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0404 22:55:13.738088   65047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
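	The generated kubeadm/kubelet/kube-proxy configuration shown above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new; further down in the restart path, minikube diffs it against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. A minimal sketch of that check, using the paths from this run and assuming SSH access to the guest:
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	An empty diff corresponds to the "running cluster does not require reconfiguration" message logged later for this profile.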
	I0404 22:55:13.762413   65047 ssh_runner.go:195] Run: grep 192.168.50.77	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:13.767294   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:13.783621   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:13.925851   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:13.946583   65047 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416 for IP: 192.168.50.77
	I0404 22:55:13.946612   65047 certs.go:194] generating shared ca certs ...
	I0404 22:55:13.946632   65047 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:13.946873   65047 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:13.946932   65047 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:13.946946   65047 certs.go:256] generating profile certs ...
	I0404 22:55:13.947038   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/client.key
	I0404 22:55:13.947126   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key.8f9148e7
	I0404 22:55:13.947183   65047 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key
	I0404 22:55:13.947338   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:13.947388   65047 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:13.947402   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:13.947442   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:13.947480   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:13.947501   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:13.947538   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:13.948161   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:13.987518   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:14.023442   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:14.054901   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:14.096855   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0404 22:55:14.140589   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:14.171957   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:14.200219   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:55:14.228332   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:14.255955   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:14.281822   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:14.310385   65047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:14.330943   65047 ssh_runner.go:195] Run: openssl version
	I0404 22:55:14.337412   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:14.350470   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355351   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355405   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.361799   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:14.374526   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:14.388775   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394578   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394643   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.401888   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:14.415796   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:14.431180   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437443   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437498   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.444439   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:55:14.458554   65047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:14.463877   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:14.471842   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:14.478423   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:14.485289   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:14.491991   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:14.499145   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:55:14.505734   65047 kubeadm.go:391] StartCluster: {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:14.505842   65047 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:14.505916   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.552731   65047 cri.go:89] found id: ""
	I0404 22:55:14.552818   65047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:14.564111   65047 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:14.564146   65047 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:14.564153   65047 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:14.564206   65047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:14.577068   65047 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:14.578074   65047 kubeconfig.go:125] found "no-preload-024416" server: "https://192.168.50.77:8443"
	I0404 22:55:14.579998   65047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:14.592375   65047 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.77
	I0404 22:55:14.592404   65047 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:14.592415   65047 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:14.592459   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.638880   65047 cri.go:89] found id: ""
	I0404 22:55:14.638970   65047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:14.663010   65047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:14.675814   65047 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:14.675853   65047 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:14.675900   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:14.688308   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:14.688375   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:14.702880   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:14.716656   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:14.716728   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:14.728605   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.739218   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:14.739283   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.750108   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:14.761607   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:14.761671   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
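
The grep/rm sequence above removes any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443, so kubeadm can regenerate all of them. A hedged Go sketch of that cleanup loop (illustrative only; minikube performs it over ssh with the shell commands shown in the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Endpoint and file list taken from the log lines above.
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale kubeconfig: remove it so
			// `kubeadm init phase kubeconfig all` writes a fresh one.
			os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}
```
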
	I0404 22:55:14.774056   65047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:14.787939   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:14.909162   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.334914   65047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.425719775s)
	I0404 22:55:16.334945   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.609889   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.686278   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.774460   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:16.774563   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.274803   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.775665   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.274707   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.298277   65047 api_server.go:72] duration metric: took 1.523817264s to wait for apiserver process to appear ...
	I0404 22:55:18.298307   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:18.298329   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:18.298961   65047 api_server.go:269] stopped: https://192.168.50.77:8443/healthz: Get "https://192.168.50.77:8443/healthz": dial tcp 192.168.50.77:8443: connect: connection refused
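
After the kubeadm init phases, the log polls https://192.168.50.77:8443/healthz until the apiserver answers 200; the early 403 responses for system:anonymous below are expected while the RBAC bootstrap roles are still being installed. A minimal anonymous polling sketch in Go; TLS verification is skipped because the apiserver certificate is signed by minikube's own CA, and this is a simplified stand-in for the check in api_server.go rather than the real client:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.77:8443/healthz" // address taken from the log
	for attempt := 0; attempt < 30; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}
```
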
	I0404 22:55:16.842824   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:18.845888   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:21.344260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:16.740949   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:16.744057   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744491   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:16.744533   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744749   65393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:16.750820   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
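
The bash one-liner above pins host.minikube.internal to the host's address on the libvirt network by filtering out any old entry and appending a fresh one. A small Go equivalent of that idempotent /etc/hosts edit (sketch only; the real flow runs the quoted shell command over ssh):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost removes any existing entry for name and appends "ip<TAB>name",
// mirroring the grep -v / echo / cp one-liner in the log.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
```
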
	I0404 22:55:16.769289   65393 kubeadm.go:877] updating cluster {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:16.769467   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:55:16.769531   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:16.827000   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:16.827077   65393 ssh_runner.go:195] Run: which lz4
	I0404 22:55:16.833494   65393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0404 22:55:16.839898   65393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:16.839972   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0404 22:55:18.896288   65393 crio.go:462] duration metric: took 2.062838778s to copy over tarball
	I0404 22:55:18.896369   65393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
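
The preload step copies preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 into the guest and unpacks it under /var with `tar -I lz4`. A rough Go equivalent of that extraction, using the third-party github.com/pierrec/lz4/v4 decompressor; this is illustrative, and ownership/xattr handling (which the real tar invocation preserves) is omitted:

```go
package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
	"path/filepath"

	"github.com/pierrec/lz4/v4"
)

func main() {
	f, err := os.Open("/preloaded.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	tr := tar.NewReader(lz4.NewReader(f)) // decompress, then untar
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		target := filepath.Join("/var", hdr.Name)
		switch hdr.Typeflag {
		case tar.TypeDir:
			os.MkdirAll(target, os.FileMode(hdr.Mode))
		case tar.TypeReg:
			os.MkdirAll(filepath.Dir(target), 0755)
			out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
			if err != nil {
				log.Fatal(err)
			}
			if _, err := io.Copy(out, tr); err != nil {
				log.Fatal(err)
			}
			out.Close()
		}
	}
}
```
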
	I0404 22:55:16.676096   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676944   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.676858   66369 retry.go:31] will retry after 272.178949ms: waiting for machine to come up
	I0404 22:55:16.951171   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951771   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.951666   66369 retry.go:31] will retry after 296.205822ms: waiting for machine to come up
	I0404 22:55:17.249449   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250006   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250039   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.249901   66369 retry.go:31] will retry after 395.504604ms: waiting for machine to come up
	I0404 22:55:17.647678   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648471   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.648337   66369 retry.go:31] will retry after 465.589308ms: waiting for machine to come up
	I0404 22:55:18.116437   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117692   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117740   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.117645   66369 retry.go:31] will retry after 763.715105ms: waiting for machine to come up
	I0404 22:55:18.883468   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884148   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884214   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.884004   66369 retry.go:31] will retry after 1.098461705s: waiting for machine to come up
	I0404 22:55:19.984522   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985080   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985111   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:19.985029   66369 retry.go:31] will retry after 1.20224728s: waiting for machine to come up
	I0404 22:55:21.188813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:21.189284   66369 retry.go:31] will retry after 1.828752424s: waiting for machine to come up
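
The default-k8s-diff-port-952083 machine is still waiting for its DHCP lease, so libmachine keeps re-querying with progressively longer delays (the retry.go:31 lines above). A simplified Go sketch of that poll-with-growing-backoff pattern; the lookup function and the placeholder address 192.0.2.10 are hypothetical, and the real helper also adds jitter:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it succeeds, sleeping a little longer after
// each failure, roughly like the retry delays in the log above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 1; i <= attempts; i++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", i, err, delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between polls
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		// Hypothetical lookup: pretend the DHCP lease appears after ~2s.
		if time.Since(start) > 2*time.Second {
			return "192.0.2.10", nil // placeholder address, not from the log
		}
		return "", errors.New("unable to find current IP address")
	}, 20)
	fmt.Println(ip, err)
}
```
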
	I0404 22:55:18.798849   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.231226   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.231258   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.231274   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.260339   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.260380   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.298635   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.330231   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.330264   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.798461   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.808690   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:21.808726   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.299332   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.305181   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.305213   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.798717   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.818393   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.818438   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.298741   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.305958   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.305991   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.798813   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.804261   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.804296   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.298487   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.304891   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:24.304928   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.798523   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.804212   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:55:24.811974   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:55:24.811999   65047 api_server.go:131] duration metric: took 6.513685326s to wait for apiserver health ...
	I0404 22:55:24.812008   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:24.812014   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:24.814353   65047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
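
"Configuring bridge CNI" writes a bridge-plus-portmap conflist into the CNI config directory so CRI-O can wire pod networking. The exact file name and contents minikube generates are not shown in this log; the following Go sketch writes a representative conflist using the standard containernetworking bridge plugin and minikube's default 10.244.0.0/16 pod CIDR, with a hypothetical destination path:

```go
package main

import (
	"log"
	"os"
)

// A representative bridge+portmap conflist; the file name and exact contents
// minikube writes may differ from this sketch.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Hypothetical destination under the CNI config directory.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		log.Fatal(err)
	}
}
```
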
	I0404 22:55:23.841622   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:25.841739   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:22.459272   65393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562873515s)
	I0404 22:55:22.459298   65393 crio.go:469] duration metric: took 3.56298059s to extract the tarball
	I0404 22:55:22.459305   65393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:22.506283   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:22.545033   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:22.545060   65393 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:55:22.545126   65393 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.545137   65393 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.545173   65393 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.545196   65393 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.545238   65393 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.545302   65393 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.545208   65393 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0404 22:55:22.545446   65393 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.546976   65393 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0404 22:55:22.547023   65393 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.547031   65393 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.546980   65393 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.547034   65393 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.547073   65393 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.547003   65393 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.547012   65393 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.771721   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0404 22:55:22.774797   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.775291   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.776362   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.785066   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.785465   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.787095   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.935936   65393 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0404 22:55:22.935986   65393 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0404 22:55:22.936032   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.978411   65393 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0404 22:55:22.978460   65393 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.978521   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.988007   65393 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0404 22:55:22.988053   65393 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.988095   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001263   65393 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0404 22:55:23.001296   65393 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0404 22:55:23.001325   65393 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.001356   65393 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.001373   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001400   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007053   65393 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0404 22:55:23.007107   65393 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.007111   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0404 22:55:23.007115   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:23.007135   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007071   65393 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0404 22:55:23.007139   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:23.007170   65393 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.007207   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.009469   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.010460   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.144982   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.145036   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0404 22:55:23.149847   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0404 22:55:23.149886   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0404 22:55:23.149931   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0404 22:55:23.386832   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.542789   65393 cache_images.go:92] duration metric: took 997.708569ms to LoadCachedImages
	W0404 22:55:23.542901   65393 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0404 22:55:23.542928   65393 kubeadm.go:928] updating node { 192.168.39.247 8443 v1.20.0 crio true true} ...
	I0404 22:55:23.543082   65393 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-343162 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:23.543199   65393 ssh_runner.go:195] Run: crio config
	I0404 22:55:23.604999   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:55:23.605023   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:23.605035   65393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:23.605057   65393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-343162 NodeName:old-k8s-version-343162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0404 22:55:23.605247   65393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-343162"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:23.605324   65393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0404 22:55:23.618943   65393 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:23.619021   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:23.631449   65393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0404 22:55:23.653570   65393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:23.674444   65393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
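
For reference, the kubeadm.yaml dumped above is rendered in memory and then copied to /var/tmp/minikube/kubeadm.yaml.new (2123 bytes) over SSH. Below is a minimal Go sketch of rendering such a fragment with text/template; the struct, field names, and template are illustrative only and are not minikube's actual implementation.

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is an illustrative subset of the values visible in the log above.
type kubeadmParams struct {
	KubernetesVersion string
	DNSDomain         string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("cc").Parse(clusterConfigTmpl))
	p := kubeadmParams{
		KubernetesVersion: "v1.20.0",
		DNSDomain:         "cluster.local",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Print the rendered YAML; the real flow copies the bytes to the node instead.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
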
	I0404 22:55:23.696627   65393 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:23.701248   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:23.715979   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:23.872589   65393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:23.893271   65393 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162 for IP: 192.168.39.247
	I0404 22:55:23.893302   65393 certs.go:194] generating shared ca certs ...
	I0404 22:55:23.893323   65393 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:23.893508   65393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:23.893573   65393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:23.893591   65393 certs.go:256] generating profile certs ...
	I0404 22:55:23.893703   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.key
	I0404 22:55:23.893789   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key.184368d7
	I0404 22:55:23.893848   65393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key
	I0404 22:55:23.894013   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:23.894060   65393 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:23.894075   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:23.894119   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:23.894152   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:23.894184   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:23.894283   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:23.895146   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:23.930844   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:23.970129   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:24.012084   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:24.061026   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0404 22:55:24.099215   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 22:55:24.144924   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:24.179027   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:24.214467   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:24.261693   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:24.294049   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:24.330228   65393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:24.357217   65393 ssh_runner.go:195] Run: openssl version
	I0404 22:55:24.364368   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:24.377630   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384429   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384493   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.392806   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:24.409360   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:24.425575   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431294   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431359   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.438465   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:24.452037   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:24.467913   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473901   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473964   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.482161   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
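
The three certificate installs above follow one pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as "<hash>.0" so TLS clients can find it. A rough Go sketch of that pattern, shelling out to openssl the same way the log does (paths are taken from the log; error handling is minimal):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert hashes the PEM with openssl and symlinks it into
// /etc/ssl/certs under "<hash>.0", mirroring the ln -fs commands above.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}
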
	I0404 22:55:24.498553   65393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:24.505506   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:24.513143   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:24.520059   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:24.527197   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:24.534384   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:24.541499   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
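
The six openssl invocations above verify that each control-plane certificate stays valid for at least another 86400 seconds (24h) before it is reused. A minimal sketch of the same check, assuming local access to the certificate files rather than the ssh_runner used in the log:

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h returns true when "openssl x509 -checkend 86400" exits 0,
// i.e. the certificate will not expire within the next 86400 seconds.
func certValidFor24h(path string) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
	return cmd.Run() == nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		fmt.Printf("%s valid >24h: %v\n", c, certValidFor24h(c))
	}
}
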
	I0404 22:55:24.548056   65393 kubeadm.go:391] StartCluster: {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:24.548157   65393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:24.548208   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.592650   65393 cri.go:89] found id: ""
	I0404 22:55:24.592732   65393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:24.605071   65393 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:24.605101   65393 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:24.605107   65393 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:24.605167   65393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:24.616615   65393 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:24.617680   65393 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-343162" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:24.618355   65393 kubeconfig.go:62] /home/jenkins/minikube-integration/16143-5297/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-343162" cluster setting kubeconfig missing "old-k8s-version-343162" context setting]
	I0404 22:55:24.619375   65393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:24.621522   65393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:24.632596   65393 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.247
	I0404 22:55:24.632645   65393 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:24.632660   65393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:24.632717   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.683167   65393 cri.go:89] found id: ""
	I0404 22:55:24.683241   65393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:24.703840   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:24.717504   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:24.717524   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:24.717579   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:24.729845   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:24.729916   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:24.741544   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:24.755004   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:24.755081   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:24.769007   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.782305   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:24.782371   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.792627   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:24.802737   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:24.802806   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
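
The four grep/rm pairs above apply a single rule: a kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed so kubeadm can regenerate it in the next phase. A rough local equivalent, as a sketch only (the endpoint and file list are taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A missing file or a file without the expected endpoint both lead to
		// removal, mirroring the "will remove" messages in the log above.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale %s\n", f)
			_ = os.Remove(f)
		}
	}
}
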
	I0404 22:55:24.814766   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:24.825422   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.006245   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.377950   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.371665115s)
	I0404 22:55:26.377988   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:24.816240   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:24.831576   65047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:55:24.861844   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:24.873557   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:24.873598   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:24.873616   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:24.873627   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:24.873642   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:24.873650   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:24.873661   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:24.873669   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:24.873677   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:24.873690   65047 system_pods.go:74] duration metric: took 11.825989ms to wait for pod list to return data ...
	I0404 22:55:24.873703   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:24.879876   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:24.879907   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:24.879922   65047 node_conditions.go:105] duration metric: took 6.209498ms to run NodePressure ...
	I0404 22:55:24.879960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.214317   65047 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:25.220926   65047 kubeadm.go:733] kubelet initialised
	I0404 22:55:25.220998   65047 kubeadm.go:734] duration metric: took 6.651112ms waiting for restarted kubelet to initialise ...
	I0404 22:55:25.221013   65047 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:25.228414   65047 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.245164   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245223   65047 pod_ready.go:81] duration metric: took 16.78145ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.245238   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245271   65047 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.258160   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258192   65047 pod_ready.go:81] duration metric: took 12.905596ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.258205   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258215   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.265054   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265079   65047 pod_ready.go:81] duration metric: took 6.856081ms for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.265090   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265098   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.276639   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276664   65047 pod_ready.go:81] duration metric: took 11.554875ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.276675   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276684   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.665602   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665631   65047 pod_ready.go:81] duration metric: took 388.935811ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.665640   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665646   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.066199   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066229   65047 pod_ready.go:81] duration metric: took 400.576415ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.066242   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066252   65047 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.466681   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466715   65047 pod_ready.go:81] duration metric: took 400.447851ms for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.466728   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466738   65047 pod_ready.go:38] duration metric: took 1.245712492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:26.466760   65047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:55:26.482692   65047 ops.go:34] apiserver oom_adj: -16
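
The oom_adj probe above (cat /proc/$(pgrep kube-apiserver)/oom_adj) confirms the restarted apiserver runs with OOM protection (-16) rather than the default 0. A small Go sketch of the same probe, assuming a Linux host where kube-apiserver is running:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver PID, as pgrep does in the log above.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read oom_adj:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}
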
	I0404 22:55:26.482712   65047 kubeadm.go:591] duration metric: took 11.918553713s to restartPrimaryControlPlane
	I0404 22:55:26.482721   65047 kubeadm.go:393] duration metric: took 11.977000438s to StartCluster
	I0404 22:55:26.482769   65047 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.482864   65047 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:26.484995   65047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.485328   65047 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:55:26.487534   65047 out.go:177] * Verifying Kubernetes components...
	I0404 22:55:26.485383   65047 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:55:26.485563   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:55:26.489030   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:26.489050   65047 addons.go:69] Setting default-storageclass=true in profile "no-preload-024416"
	I0404 22:55:26.489093   65047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-024416"
	I0404 22:55:26.489056   65047 addons.go:69] Setting metrics-server=true in profile "no-preload-024416"
	I0404 22:55:26.489149   65047 addons.go:234] Setting addon metrics-server=true in "no-preload-024416"
	W0404 22:55:26.489172   65047 addons.go:243] addon metrics-server should already be in state true
	I0404 22:55:26.489211   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489054   65047 addons.go:69] Setting storage-provisioner=true in profile "no-preload-024416"
	I0404 22:55:26.489277   65047 addons.go:234] Setting addon storage-provisioner=true in "no-preload-024416"
	W0404 22:55:26.489290   65047 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:55:26.489319   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489539   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489573   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489591   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489641   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489667   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489685   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.508693   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0404 22:55:26.509305   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.510142   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.510166   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.510664   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.510832   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.511052   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41779
	I0404 22:55:26.511503   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.512213   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.512232   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.512619   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.513233   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.513270   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.515412   65047 addons.go:234] Setting addon default-storageclass=true in "no-preload-024416"
	W0404 22:55:26.515459   65047 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:55:26.515498   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.515891   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.515954   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.534673   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I0404 22:55:26.535571   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.536148   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.536173   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.536663   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.536896   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.537603   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37341
	I0404 22:55:26.538883   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.541150   65047 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:55:26.539561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0404 22:55:26.539868   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.542772   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:55:26.542787   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:55:26.542805   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.544250   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.544270   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.544593   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.544676   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.545317   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.545356   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.546171   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.546188   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.546711   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.546722   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547227   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.547285   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.547464   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.547499   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.547905   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.548076   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.548259   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.564561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0404 22:55:26.565127   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.565730   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.565757   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.566293   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.566516   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.567998   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0404 22:55:26.568463   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.568520   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.569025   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.569047   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.569379   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.569543   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.571698   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.572551   65047 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.572566   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:55:26.572582   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.574538   65047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.019306   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019754   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019781   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:23.019722   66369 retry.go:31] will retry after 1.404874513s: waiting for machine to come up
	I0404 22:55:24.425830   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426412   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426442   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:24.426364   66369 retry.go:31] will retry after 2.757787773s: waiting for machine to come up
	I0404 22:55:26.576073   65047 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.576092   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:55:26.576106   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.580084   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.580585   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.581225   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.581277   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.581495   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.581699   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.581854   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.582320   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582345   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.582386   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582604   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.584316   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.584463   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.584614   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.731392   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:26.753167   65047 node_ready.go:35] waiting up to 6m0s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:26.829134   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.955286   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:55:26.955377   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:55:26.968469   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.984915   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:55:26.984948   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:55:27.028529   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.028558   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:55:27.092343   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.186244   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186319   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186642   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.186662   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.186677   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.186690   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186709   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186969   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.187017   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.187031   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.193602   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.193623   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.193903   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.193913   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.193920   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878278   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878305   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.878702   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.878746   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.878760   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878779   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878787   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.879104   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.879197   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.879228   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054442   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054471   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.054800   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.054858   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054874   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.054896   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054905   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.055233   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.055259   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.055270   65047 addons.go:470] Verifying addon metrics-server=true in "no-preload-024416"
	I0404 22:55:28.055236   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.057439   65047 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0404 22:55:28.058994   65047 addons.go:505] duration metric: took 1.573614168s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0404 22:55:27.842914   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:30.342668   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:26.696179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.809448   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.955521   65393 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:26.955614   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.456445   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.956055   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.455728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.956667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.455874   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.955832   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.455677   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.956327   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:31.456660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.188296   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188797   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188827   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:27.188759   66369 retry.go:31] will retry after 2.351381492s: waiting for machine to come up
	I0404 22:55:29.541200   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541601   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541628   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:29.541553   66369 retry.go:31] will retry after 4.132646705s: waiting for machine to come up
	I0404 22:55:28.757883   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:31.257480   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:32.257641   65047 node_ready.go:49] node "no-preload-024416" has status "Ready":"True"
	I0404 22:55:32.257665   65047 node_ready.go:38] duration metric: took 5.504446554s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:32.257673   65047 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:32.263652   65047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268791   65047 pod_ready.go:92] pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.268811   65047 pod_ready.go:81] duration metric: took 5.13427ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268820   65047 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273130   65047 pod_ready.go:92] pod "etcd-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.273147   65047 pod_ready.go:81] duration metric: took 4.32194ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273155   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
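
The pod_ready.go loop above waits for each system-critical pod to report the PodReady condition as True (earlier, while the node itself was NotReady, the same loop skipped each pod with the "skipping!" messages). A condensed client-go sketch of that readiness check; the kubeconfig path and pod name below are placeholders:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(clientset *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(clientset, "kube-system", "etcd-no-preload-024416")
	fmt.Println(ready, err)
}
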
	I0404 22:55:32.841480   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:35.342317   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:31.956195   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.456027   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.955974   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.456657   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.956113   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.456687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.955878   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.456630   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.956721   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:36.456247   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.678106   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.678633   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Found IP for machine: 192.168.72.148
	I0404 22:55:33.678669   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserving static IP address...
	I0404 22:55:33.678686   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has current primary IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.679110   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.679145   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserved static IP address: 192.168.72.148
	I0404 22:55:33.679165   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | skip adding static IP to network mk-default-k8s-diff-port-952083 - found existing host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"}
	I0404 22:55:33.679184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Getting to WaitForSSH function...
	I0404 22:55:33.679196   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for SSH to be available...
	I0404 22:55:33.681734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682113   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.682144   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682283   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH client type: external
	I0404 22:55:33.682325   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa (-rw-------)
	I0404 22:55:33.682356   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:33.682372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | About to run SSH command:
	I0404 22:55:33.682427   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | exit 0
	I0404 22:55:33.812360   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:33.812704   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetConfigRaw
	I0404 22:55:33.813408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:33.816515   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.816970   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.817052   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.817322   64791 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/config.json ...
	I0404 22:55:33.817590   64791 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:33.817615   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:33.817828   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.820061   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820388   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.820421   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820604   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.820762   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.820948   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.821063   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.821211   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.821433   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.821450   64791 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:33.928894   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:33.928927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929163   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:55:33.929186   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.931838   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932292   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.932323   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932506   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.932688   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932844   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.933158   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.933321   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.933335   64791 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-952083 && echo "default-k8s-diff-port-952083" | sudo tee /etc/hostname
	I0404 22:55:34.060158   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-952083
	
	I0404 22:55:34.060185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.063179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063552   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.063586   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063777   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.063975   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064172   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064314   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.064477   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.064628   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.064650   64791 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-952083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-952083/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-952083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:34.186212   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:55:34.186240   64791 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:34.186309   64791 buildroot.go:174] setting up certificates
	I0404 22:55:34.186332   64791 provision.go:84] configureAuth start
	I0404 22:55:34.186351   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:34.186637   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:34.189184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189504   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.189544   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189635   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.191813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192315   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.192341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192507   64791 provision.go:143] copyHostCerts
	I0404 22:55:34.192560   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:34.192569   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:34.192622   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:34.192717   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:34.192726   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:34.192749   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:34.192812   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:34.192820   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:34.192838   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:34.192901   64791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-952083 san=[127.0.0.1 192.168.72.148 default-k8s-diff-port-952083 localhost minikube]
	I0404 22:55:34.326920   64791 provision.go:177] copyRemoteCerts
	I0404 22:55:34.326975   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:34.326997   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.329376   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329725   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.329752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.330118   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.330260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.330401   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.416395   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:34.443648   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0404 22:55:34.470803   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:55:34.497420   64791 provision.go:87] duration metric: took 311.071464ms to configureAuth
	I0404 22:55:34.497454   64791 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:34.497663   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:55:34.497759   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.500799   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501149   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.501180   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501387   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.501595   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501915   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.502071   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.502271   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.502303   64791 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:34.775054   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:34.775085   64791 machine.go:97] duration metric: took 957.478657ms to provisionDockerMachine
	I0404 22:55:34.775099   64791 start.go:293] postStartSetup for "default-k8s-diff-port-952083" (driver="kvm2")
	I0404 22:55:34.775112   64791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:34.775131   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:34.775488   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:34.775534   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.778577   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779005   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.779036   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779204   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.779393   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.779524   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.779658   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.864675   64791 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:34.869694   64791 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:34.869730   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:34.869826   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:34.869920   64791 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:34.870061   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:34.880961   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:34.907037   64791 start.go:296] duration metric: took 131.924012ms for postStartSetup
	I0404 22:55:34.907080   64791 fix.go:56] duration metric: took 19.833301571s for fixHost
	I0404 22:55:34.907108   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.909893   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910291   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.910316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910473   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.910679   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.910880   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.911020   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.911190   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.911412   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.911428   64791 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:55:35.022167   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271334.996001157
	
	I0404 22:55:35.022190   64791 fix.go:216] guest clock: 1712271334.996001157
	I0404 22:55:35.022200   64791 fix.go:229] Guest: 2024-04-04 22:55:34.996001157 +0000 UTC Remote: 2024-04-04 22:55:34.907085076 +0000 UTC m=+358.286689706 (delta=88.916081ms)
	I0404 22:55:35.022224   64791 fix.go:200] guest clock delta is within tolerance: 88.916081ms
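	The delta reported above is simply the guest clock minus the host-side timestamp; an illustrative re-check of the arithmetic:
		echo '1712271334.996001157 - 1712271334.907085076' | bc   # .088916081 s, i.e. the 88.916081ms delta judged "within tolerance"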
	I0404 22:55:35.022231   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 19.948498144s
	I0404 22:55:35.022255   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.022485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:35.025707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026147   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.026179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026378   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.026876   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027047   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027120   64791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:35.027159   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.027276   64791 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:35.027318   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.030408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030592   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030785   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030819   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030910   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030929   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.030943   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.031146   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.031317   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031486   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.031653   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.105523   64791 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:35.145225   64791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:35.294577   64791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:35.301227   64791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:35.301318   64791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:35.318712   64791 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:35.318739   64791 start.go:494] detecting cgroup driver to use...
	I0404 22:55:35.318797   64791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:35.335684   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:35.351068   64791 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:35.351139   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:35.366084   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:35.381259   64791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:35.525652   64791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:35.702060   64791 docker.go:233] disabling docker service ...
	I0404 22:55:35.702137   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:35.720037   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:35.735605   64791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:35.868176   64791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:36.011977   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:36.030320   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:36.051953   64791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:55:36.052033   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.063475   64791 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:36.063539   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.075238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.086866   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.099524   64791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:36.112514   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.126407   64791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.146238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
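	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl set as shown below. The grep is an illustrative way to confirm it; the expected output is reconstructed from the commands, not captured from the VM:
		# Illustrative check of the edited CRI-O drop-in; expected matches per the sed commands above
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		# pause_image = "registry.k8s.io/pause:3.9"
		# cgroup_manager = "cgroupfs"
		# conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",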
	I0404 22:55:36.158896   64791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:36.170410   64791 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:36.170482   64791 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:36.185608   64791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:36.196706   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:36.327075   64791 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:55:36.470054   64791 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:36.470125   64791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:36.475664   64791 start.go:562] Will wait 60s for crictl version
	I0404 22:55:36.475727   64791 ssh_runner.go:195] Run: which crictl
	I0404 22:55:36.480165   64791 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:36.519941   64791 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:36.520021   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.549053   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.579491   64791 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:55:36.581026   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:36.583730   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584150   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:36.584184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584371   64791 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:36.588964   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:36.602269   64791 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:36.602374   64791 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:55:36.602416   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:36.641719   64791 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:55:36.641787   64791 ssh_runner.go:195] Run: which lz4
	I0404 22:55:36.646084   64791 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:55:36.650605   64791 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:36.650644   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:55:34.279364   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.288411   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:37.280858   65047 pod_ready.go:92] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.280895   65047 pod_ready.go:81] duration metric: took 5.007733221s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.280913   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286888   65047 pod_ready.go:92] pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.286912   65047 pod_ready.go:81] duration metric: took 5.987359ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286922   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292489   65047 pod_ready.go:92] pod "kube-proxy-zmx89" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.292518   65047 pod_ready.go:81] duration metric: took 5.588199ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292530   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298455   65047 pod_ready.go:92] pod "kube-scheduler-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.298478   65047 pod_ready.go:81] duration metric: took 5.939579ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298493   65047 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.342788   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:39.841832   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.956068   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.456343   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.955665   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.456277   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.956681   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.455983   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.956000   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.456576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.956669   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.455855   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.281827   64791 crio.go:462] duration metric: took 1.635781318s to copy over tarball
	I0404 22:55:38.281962   64791 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:40.664660   64791 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.382669035s)
	I0404 22:55:40.664694   64791 crio.go:469] duration metric: took 2.382835483s to extract the tarball
	I0404 22:55:40.664704   64791 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:40.706370   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:40.753953   64791 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:55:40.753983   64791 cache_images.go:84] Images are preloaded, skipping loading
	I0404 22:55:40.753992   64791 kubeadm.go:928] updating node { 192.168.72.148 8444 v1.29.3 crio true true} ...
	I0404 22:55:40.754096   64791 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-952083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:40.754157   64791 ssh_runner.go:195] Run: crio config
	I0404 22:55:40.809287   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:40.809309   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:40.809318   64791 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:40.809338   64791 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.148 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-952083 NodeName:default-k8s-diff-port-952083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:40.809467   64791 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.148
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-952083"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:40.809531   64791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:55:40.821089   64791 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:40.821151   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:40.832576   64791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0404 22:55:40.853706   64791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:40.874277   64791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0404 22:55:40.895096   64791 ssh_runner.go:195] Run: grep 192.168.72.148	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:40.899433   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:40.913456   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:41.041078   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:41.062340   64791 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083 for IP: 192.168.72.148
	I0404 22:55:41.062382   64791 certs.go:194] generating shared ca certs ...
	I0404 22:55:41.062402   64791 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:41.062583   64791 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:41.062640   64791 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:41.062662   64791 certs.go:256] generating profile certs ...
	I0404 22:55:41.062776   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/client.key
	I0404 22:55:41.062859   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key.c46373d6
	I0404 22:55:41.062921   64791 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key
	I0404 22:55:41.063037   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:41.063065   64791 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:41.063075   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:41.063099   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:41.063140   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:41.063166   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:41.063200   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:41.063842   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:41.113790   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:41.142967   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:41.174154   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:41.209434   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0404 22:55:41.244064   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:41.272716   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:41.297871   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:41.325547   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:41.352050   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:41.377876   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:41.404387   64791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:41.423187   64791 ssh_runner.go:195] Run: openssl version
	I0404 22:55:41.429873   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:41.442164   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447557   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447623   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.454131   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:41.467223   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:41.480671   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485831   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485898   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.492136   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:41.505696   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:41.519176   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524158   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524221   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.531176   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:55:41.543412   64791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:41.548643   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:41.555139   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:41.562284   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:41.569434   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:41.576462   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:41.583084   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
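
The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above verify that each control-plane certificate remains valid for at least another 86400 seconds (24 hours); the -checkend flag makes openssl exit non-zero if the certificate expires within that window. A minimal Go sketch of the same check using crypto/x509 (the path below is one of the certs copied earlier in this log; adjust as needed):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkend reports whether the PEM certificate at path is still valid
    // for at least d more time, mirroring "openssl x509 -checkend".
    func checkend(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := checkend("/var/lib/minikube/certs/apiserver.crt", 86400*time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("valid for the next 24h:", ok)
    }
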
	I0404 22:55:41.589945   64791 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:41.590031   64791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:41.590093   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:41.632266   64791 cri.go:89] found id: ""
	I0404 22:55:41.632350   64791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:41.644142   64791 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:41.644164   64791 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:41.644170   64791 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:41.644215   64791 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:41.656179   64791 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:41.657324   64791 kubeconfig.go:125] found "default-k8s-diff-port-952083" server: "https://192.168.72.148:8444"
	I0404 22:55:41.659605   64791 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:39.306769   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.307336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:42.038454   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:44.341305   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.341781   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.956291   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.455751   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.955701   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.456455   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.956511   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.456335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.956604   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.456239   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.955763   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:46.456691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.672105   64791 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.148
	I0404 22:55:42.028665   64791 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:42.028686   64791 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:42.028762   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:42.091777   64791 cri.go:89] found id: ""
	I0404 22:55:42.091854   64791 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:42.113539   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:42.124875   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:42.124901   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:42.124954   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 22:55:42.135500   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:42.135570   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:42.146253   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 22:55:42.156700   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:42.156761   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:42.168384   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.179440   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:42.179516   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.191004   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 22:55:42.201446   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:42.201506   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:42.212001   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:42.223300   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:42.338171   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.549365   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.211152725s)
	I0404 22:55:43.549401   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.801115   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.882593   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.959297   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:43.959380   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.459491   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.960236   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.988764   64791 api_server.go:72] duration metric: took 1.029467706s to wait for apiserver process to appear ...
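
Each "sudo pgrep -xnf kube-apiserver.*minikube.*" line above is one iteration of a poll: pgrep exits non-zero until a process whose full command line matches the pattern exists, so the runner retries on an interval until it appears or a timeout elapses. A rough sketch of that wait, assuming a local shell rather than minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until a process matching pattern appears
    // or the timeout elapses. pgrep exits 0 only when there is a match.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kube-apiserver process is up")
    }
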
	I0404 22:55:44.988792   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:44.988813   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:43.615360   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:45.804976   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:47.806675   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:48.357574   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.357611   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.357628   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.395772   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.395808   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.488922   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.505422   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.505481   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:48.988969   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.994000   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.994032   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.489335   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.500302   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:49.500347   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.989893   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.994450   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 22:55:50.001679   64791 api_server.go:141] control plane version: v1.29.3
	I0404 22:55:50.001715   64791 api_server.go:131] duration metric: took 5.012915028s to wait for apiserver health ...
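
The healthz wait keeps requesting https://192.168.72.148:8444/healthz and only moves on once it receives HTTP 200; the 403 responses (anonymous access before RBAC bootstrap completes) and 500 responses (post-start hooks still failing) seen above both count as not ready. A condensed sketch of such a loop; it skips TLS verification for brevity, whereas the real check trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls an apiserver /healthz endpoint until it returns 200.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: the real client verifies the apiserver cert against the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200 "ok"
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.72.148:8444/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver healthy")
    }
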
	I0404 22:55:50.001726   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:50.001737   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:50.004063   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:55:48.840760   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:50.842564   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.956402   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.456719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.956495   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.456256   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.955795   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.455864   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.956545   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.456336   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.956454   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:51.455845   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.005634   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:50.017205   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
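
The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration referred to by the "Configuring bridge CNI (Container Networking Interface)" line above; its exact contents are not captured in this log. Purely for illustration, a generic bridge plus host-local conflist (every value here is a placeholder, not necessarily what minikube writes) could be laid down like this:

    package main

    import (
        "fmt"
        "os"
    )

    // Illustrative bridge CNI config; field values are placeholders, not the
    // actual 1-k8s.conflist that minikube generates.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Writing under /etc/cni/net.d requires root on the target node.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("wrote example bridge conflist")
    }
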
	I0404 22:55:50.041244   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:50.050985   64791 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:50.051033   64791 system_pods.go:61] "coredns-76f75df574-psx97" [9e220912-4d45-45d7-85cb-65396d969b27] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:50.051044   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [980bda86-5d40-4762-ae29-9a1d01bcd17f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:50.051055   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [0d3ec134-dc70-445a-a2f8-1c00f6711b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:50.051064   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [d82ba649-a2ae-479e-8aa1-e734bc69f702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:50.051074   64791 system_pods.go:61] "kube-proxy-ssg9w" [23098716-a4cd-4164-a604-fc27b807050f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:50.051081   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [58c1457c-816c-4f10-ac46-be14b4e20592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:50.051089   64791 system_pods.go:61] "metrics-server-57f55c9bc5-zbl54" [f754b7dd-4233-4faa-b8d0-0d42d4c432a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:50.051105   64791 system_pods.go:61] "storage-provisioner" [3ea130dd-2282-4094-b3ca-990faf89b79b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:50.051116   64791 system_pods.go:74] duration metric: took 9.852724ms to wait for pod list to return data ...
	I0404 22:55:50.051136   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:50.056106   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:50.056149   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:50.056161   64791 node_conditions.go:105] duration metric: took 5.019174ms to run NodePressure ...
	I0404 22:55:50.056180   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:50.366600   64791 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371116   64791 kubeadm.go:733] kubelet initialised
	I0404 22:55:50.371136   64791 kubeadm.go:734] duration metric: took 4.514303ms waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371152   64791 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:50.376537   64791 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.382379   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382406   64791 pod_ready.go:81] duration metric: took 5.841107ms for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.382415   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382421   64791 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.387835   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387862   64791 pod_ready.go:81] duration metric: took 5.433039ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.387874   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387883   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.395448   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395477   64791 pod_ready.go:81] duration metric: took 7.587276ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.395488   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395494   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.444766   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444796   64791 pod_ready.go:81] duration metric: took 49.295101ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.444807   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444813   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845479   64791 pod_ready.go:92] pod "kube-proxy-ssg9w" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:50.845502   64791 pod_ready.go:81] duration metric: took 400.682428ms for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845511   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
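
The pod_ready.go waits in this run boil down to fetching each system pod and checking its PodReady condition, skipping pods whose node is not yet Ready (the "(skipping!)" lines above). A small client-go sketch of that readiness test, assuming the kubeconfig written earlier in this log is reachable; the pod name is simply the kube-proxy pod from this run:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := podReady(cs, "kube-system", "kube-proxy-ssg9w")
        fmt.Println("ready:", ready, "err:", err)
    }
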
	I0404 22:55:50.305975   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.307023   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.842745   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:55.341222   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:51.955847   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.456474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.956717   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.456485   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.956668   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.455709   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.956540   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.455959   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.955819   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:56.456025   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.852107   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.856385   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.806097   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.806696   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.841694   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:00.341504   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.955746   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.456752   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.956458   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.455791   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.956520   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.456037   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.956424   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.456347   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.955807   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.456367   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.365969   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.853738   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:57.853764   64791 pod_ready.go:81] duration metric: took 7.008247096s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:57.853774   64791 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:59.862975   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:59.305144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.805592   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:02.341599   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.842260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.956676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.455725   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.956240   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.456026   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.955796   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.455684   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.955830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.455658   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.956488   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:06.456780   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.863099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.361040   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.362845   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:03.805883   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.305272   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:08.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:07.341339   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:09.341800   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.342526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.956270   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.456242   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.956623   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.455980   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.955699   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.456558   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.955713   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.455854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.955758   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:11.456543   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.363849   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.861759   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.805375   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:12.808721   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:13.841173   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.842526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.956223   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.456571   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.956246   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.456188   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.956319   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.456519   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.955719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.455682   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.956653   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:16.456447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.361956   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.362846   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.306163   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.806194   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.845799   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.342615   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:16.955750   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.456215   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.956505   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.456183   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.955744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.456227   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.955977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.455808   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.956276   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.456049   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.861226   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:19.861343   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.307101   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.805150   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.839779   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.841442   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:21.956160   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.456145   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.956335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.456579   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.956691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.456562   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.956005   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.456432   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.956467   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:26.456167   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.862937   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.360818   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.809913   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.305921   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.342163   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.347258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:26.956592   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:26.956691   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:27.005811   65393 cri.go:89] found id: ""
	I0404 22:56:27.005835   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.005842   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:27.005847   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:27.005917   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:27.042188   65393 cri.go:89] found id: ""
	I0404 22:56:27.042211   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.042235   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:27.042241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:27.042286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:27.079865   65393 cri.go:89] found id: ""
	I0404 22:56:27.079892   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.079900   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:27.079906   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:27.079958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:27.116998   65393 cri.go:89] found id: ""
	I0404 22:56:27.117024   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.117031   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:27.117037   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:27.117086   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:27.156188   65393 cri.go:89] found id: ""
	I0404 22:56:27.156214   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.156221   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:27.156236   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:27.156314   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:27.191588   65393 cri.go:89] found id: ""
	I0404 22:56:27.191620   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.191633   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:27.191640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:27.191687   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:27.231075   65393 cri.go:89] found id: ""
	I0404 22:56:27.231107   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.231117   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:27.231124   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:27.231190   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:27.270480   65393 cri.go:89] found id: ""
	I0404 22:56:27.270513   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.270525   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:27.270537   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:27.270561   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:27.324332   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:27.324373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:27.339847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:27.339874   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:27.468846   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:27.468868   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:27.468885   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:27.533979   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:27.534016   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.080447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:30.094371   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:30.094442   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:30.132187   65393 cri.go:89] found id: ""
	I0404 22:56:30.132211   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.132219   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:30.132225   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:30.132271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:30.173402   65393 cri.go:89] found id: ""
	I0404 22:56:30.173427   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.173437   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:30.173445   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:30.173509   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:30.212702   65393 cri.go:89] found id: ""
	I0404 22:56:30.212759   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.212773   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:30.212784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:30.212857   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:30.267124   65393 cri.go:89] found id: ""
	I0404 22:56:30.267153   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.267164   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:30.267171   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:30.267240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:30.314850   65393 cri.go:89] found id: ""
	I0404 22:56:30.314877   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.314887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:30.314895   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:30.314951   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:30.353961   65393 cri.go:89] found id: ""
	I0404 22:56:30.353985   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.353996   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:30.354003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:30.354065   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:30.393287   65393 cri.go:89] found id: ""
	I0404 22:56:30.393321   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.393333   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:30.393340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:30.393402   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:30.432268   65393 cri.go:89] found id: ""
	I0404 22:56:30.432304   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.432315   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:30.432333   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:30.432349   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:30.498906   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:30.498941   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.544676   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:30.544711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:30.595528   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:30.595562   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:30.610773   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:30.610811   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:30.688433   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:26.862939   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.360914   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.360952   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.806657   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:32.304653   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.841404   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.341994   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:33.188634   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:33.203199   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:33.203262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:33.243222   65393 cri.go:89] found id: ""
	I0404 22:56:33.243250   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.243257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:33.243262   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:33.243330   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:33.284525   65393 cri.go:89] found id: ""
	I0404 22:56:33.284550   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.284560   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:33.284567   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:33.284621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:33.324219   65393 cri.go:89] found id: ""
	I0404 22:56:33.324249   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.324266   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:33.324273   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:33.324328   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:33.362229   65393 cri.go:89] found id: ""
	I0404 22:56:33.362254   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.362265   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:33.362272   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:33.362333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.404616   65393 cri.go:89] found id: ""
	I0404 22:56:33.404651   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.404663   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:33.404671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:33.404741   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:33.445123   65393 cri.go:89] found id: ""
	I0404 22:56:33.445150   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.445160   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:33.445168   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:33.445227   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:33.485996   65393 cri.go:89] found id: ""
	I0404 22:56:33.486025   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.486033   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:33.486041   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:33.486108   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:33.524275   65393 cri.go:89] found id: ""
	I0404 22:56:33.524299   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.524307   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:33.524315   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:33.524326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:33.577095   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:33.577133   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:33.592799   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:33.592830   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:33.672784   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:33.672804   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:33.672815   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:33.748049   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:33.748091   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:36.293335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:36.308303   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:36.308395   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:36.346627   65393 cri.go:89] found id: ""
	I0404 22:56:36.346648   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.346656   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:36.346661   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:36.346704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:36.386923   65393 cri.go:89] found id: ""
	I0404 22:56:36.386950   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.386958   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:36.386963   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:36.387011   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:36.427795   65393 cri.go:89] found id: ""
	I0404 22:56:36.427824   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.427832   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:36.427838   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:36.427898   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:36.464754   65393 cri.go:89] found id: ""
	I0404 22:56:36.464780   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.464790   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:36.464797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:36.464864   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.361764   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:35.861327   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.807482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:37.305470   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.842142   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:38.843672   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.341676   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.499502   65393 cri.go:89] found id: ""
	I0404 22:56:36.499529   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.499540   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:36.499549   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:36.499610   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:36.536686   65393 cri.go:89] found id: ""
	I0404 22:56:36.536716   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.536726   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:36.536734   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:36.536782   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:36.572573   65393 cri.go:89] found id: ""
	I0404 22:56:36.572602   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.572614   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:36.572623   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:36.572683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:36.607444   65393 cri.go:89] found id: ""
	I0404 22:56:36.607474   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.607483   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:36.607491   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:36.607502   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:36.662990   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:36.663040   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:36.677996   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:36.678019   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:36.751850   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:36.751871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:36.751886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:36.831451   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:36.831484   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:39.393170   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:39.407581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:39.407640   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:39.441850   65393 cri.go:89] found id: ""
	I0404 22:56:39.441879   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.441889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:39.441896   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:39.441981   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:39.476890   65393 cri.go:89] found id: ""
	I0404 22:56:39.476919   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.476931   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:39.476938   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:39.477001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:39.511497   65393 cri.go:89] found id: ""
	I0404 22:56:39.511523   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.511534   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:39.511540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:39.511605   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:39.549485   65393 cri.go:89] found id: ""
	I0404 22:56:39.549514   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.549526   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:39.549534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:39.549594   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:39.589203   65393 cri.go:89] found id: ""
	I0404 22:56:39.589234   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.589243   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:39.589249   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:39.589311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:39.626903   65393 cri.go:89] found id: ""
	I0404 22:56:39.626926   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.626939   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:39.626946   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:39.627008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:39.660975   65393 cri.go:89] found id: ""
	I0404 22:56:39.660999   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.661007   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:39.661016   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:39.661067   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:39.697162   65393 cri.go:89] found id: ""
	I0404 22:56:39.697187   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.697195   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:39.697203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:39.697217   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:39.755642   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:39.755683   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:39.773016   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:39.773052   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:39.849466   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:39.849491   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:39.849507   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:39.940680   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:39.940716   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:38.360638   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:40.361088   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:39.305528   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.307136   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.844830   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:46.342481   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:42.486573   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:42.500548   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:42.500612   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:42.539890   65393 cri.go:89] found id: ""
	I0404 22:56:42.539926   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.539954   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:42.539964   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:42.540030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:42.576664   65393 cri.go:89] found id: ""
	I0404 22:56:42.576699   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.576709   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:42.576715   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:42.576816   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:42.615456   65393 cri.go:89] found id: ""
	I0404 22:56:42.615489   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.615500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:42.615507   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:42.615569   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:42.667886   65393 cri.go:89] found id: ""
	I0404 22:56:42.667917   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.667928   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:42.667935   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:42.668001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:42.726120   65393 cri.go:89] found id: ""
	I0404 22:56:42.726150   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.726162   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:42.726169   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:42.726233   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:42.781280   65393 cri.go:89] found id: ""
	I0404 22:56:42.781305   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.781316   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:42.781322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:42.781386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:42.818419   65393 cri.go:89] found id: ""
	I0404 22:56:42.818449   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.818459   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:42.818466   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:42.818531   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:42.867866   65393 cri.go:89] found id: ""
	I0404 22:56:42.867902   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.867911   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:42.867920   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:42.867935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:42.953141   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:42.953186   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:42.994936   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:42.994968   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:43.047257   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:43.047288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:43.062391   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:43.062426   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:43.138948   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:45.639469   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:45.654584   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:45.654662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:45.693244   65393 cri.go:89] found id: ""
	I0404 22:56:45.693267   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.693276   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:45.693281   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:45.693335   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:45.732468   65393 cri.go:89] found id: ""
	I0404 22:56:45.732501   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.732513   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:45.732520   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:45.732587   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:45.769504   65393 cri.go:89] found id: ""
	I0404 22:56:45.769538   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.769552   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:45.769560   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:45.769625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:45.815840   65393 cri.go:89] found id: ""
	I0404 22:56:45.815864   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.815872   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:45.815877   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:45.815987   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:45.856479   65393 cri.go:89] found id: ""
	I0404 22:56:45.856511   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.856522   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:45.856530   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:45.856596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:45.896703   65393 cri.go:89] found id: ""
	I0404 22:56:45.896726   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.896734   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:45.896739   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:45.896797   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:45.938227   65393 cri.go:89] found id: ""
	I0404 22:56:45.938253   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.938261   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:45.938266   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:45.938349   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:45.975762   65393 cri.go:89] found id: ""
	I0404 22:56:45.975790   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.975801   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:45.975811   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:45.975823   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:46.056563   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:46.056599   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.099210   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:46.099242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:46.153524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:46.153560   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:46.169268   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:46.169297   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:46.244893   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:42.361586   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:44.860609   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.806644   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:45.807773   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:47.807842   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.841498   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.341274   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.745334   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:48.760547   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:48.760625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:48.799676   65393 cri.go:89] found id: ""
	I0404 22:56:48.799699   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.799706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:48.799721   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:48.799780   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:48.843434   65393 cri.go:89] found id: ""
	I0404 22:56:48.843467   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.843476   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:48.843481   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:48.843544   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:48.895391   65393 cri.go:89] found id: ""
	I0404 22:56:48.895421   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.895440   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:48.895448   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:48.895513   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:48.936222   65393 cri.go:89] found id: ""
	I0404 22:56:48.936252   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.936263   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:48.936271   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:48.936334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:48.974533   65393 cri.go:89] found id: ""
	I0404 22:56:48.974563   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.974570   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:48.974575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:48.974629   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:49.015378   65393 cri.go:89] found id: ""
	I0404 22:56:49.015406   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.015424   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:49.015440   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:49.015501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:49.056623   65393 cri.go:89] found id: ""
	I0404 22:56:49.056647   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.056658   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:49.056664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:49.056725   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:49.102414   65393 cri.go:89] found id: ""
	I0404 22:56:49.102442   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.102453   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:49.102464   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:49.102476   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:49.158193   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:49.158240   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:49.173121   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:49.173147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:49.248973   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:49.249000   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:49.249017   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:49.341732   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:49.341778   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.861623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.863321   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.362926   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:50.305198   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:52.305614   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:53.348777   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:55.848753   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.888403   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:51.903442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:51.903503   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:51.941672   65393 cri.go:89] found id: ""
	I0404 22:56:51.941698   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.941706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:51.941712   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:51.941766   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:51.981373   65393 cri.go:89] found id: ""
	I0404 22:56:51.981396   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.981408   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:51.981413   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:51.981460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:52.019539   65393 cri.go:89] found id: ""
	I0404 22:56:52.019567   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.019575   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:52.019581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:52.019645   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:52.059015   65393 cri.go:89] found id: ""
	I0404 22:56:52.059050   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.059060   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:52.059073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:52.059134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:52.099879   65393 cri.go:89] found id: ""
	I0404 22:56:52.099904   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.099911   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:52.099917   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:52.099975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:52.140665   65393 cri.go:89] found id: ""
	I0404 22:56:52.140739   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.140752   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:52.140761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:52.140833   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:52.182063   65393 cri.go:89] found id: ""
	I0404 22:56:52.182091   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.182100   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:52.182106   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:52.182161   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:52.218610   65393 cri.go:89] found id: ""
	I0404 22:56:52.218636   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.218644   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:52.218651   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:52.218666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:52.232987   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:52.233014   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:52.307317   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:52.307343   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:52.307358   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:52.390484   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:52.390522   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:52.437568   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:52.437601   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:54.995830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:55.010758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:55.010818   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:55.057312   65393 cri.go:89] found id: ""
	I0404 22:56:55.057342   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.057354   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:55.057362   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:55.057440   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:55.097774   65393 cri.go:89] found id: ""
	I0404 22:56:55.097803   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.097815   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:55.097822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:55.097881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:55.134169   65393 cri.go:89] found id: ""
	I0404 22:56:55.134198   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.134207   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:55.134215   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:55.134268   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:55.174151   65393 cri.go:89] found id: ""
	I0404 22:56:55.174191   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.174203   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:55.174210   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:55.174297   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:55.213618   65393 cri.go:89] found id: ""
	I0404 22:56:55.213646   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.213655   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:55.213661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:55.213722   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:55.252406   65393 cri.go:89] found id: ""
	I0404 22:56:55.252435   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.252446   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:55.252455   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:55.252536   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:55.289595   65393 cri.go:89] found id: ""
	I0404 22:56:55.289623   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.289633   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:55.289640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:55.289702   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:55.328579   65393 cri.go:89] found id: ""
	I0404 22:56:55.328611   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.328621   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:55.328632   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:55.328647   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:55.384434   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:55.384470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:55.399311   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:55.399336   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:55.478341   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:55.478370   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:55.478404   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:55.560209   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:55.560242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:53.861243   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.361466   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:54.306859   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.804903   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.341302   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.342865   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.104854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:58.119735   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:58.119813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:58.158145   65393 cri.go:89] found id: ""
	I0404 22:56:58.158169   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.158182   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:58.158190   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:58.158255   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:58.198676   65393 cri.go:89] found id: ""
	I0404 22:56:58.198703   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.198714   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:58.198721   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:58.198779   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:58.236393   65393 cri.go:89] found id: ""
	I0404 22:56:58.236419   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.236430   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:58.236436   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:58.236505   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:58.274688   65393 cri.go:89] found id: ""
	I0404 22:56:58.274714   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.274724   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:58.274731   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:58.274796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:58.311908   65393 cri.go:89] found id: ""
	I0404 22:56:58.311935   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.311947   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:58.311956   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:58.312016   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:58.353477   65393 cri.go:89] found id: ""
	I0404 22:56:58.353500   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.353513   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:58.353518   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:58.353577   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.390981   65393 cri.go:89] found id: ""
	I0404 22:56:58.391004   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.391012   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:58.391017   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:58.391084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:58.433125   65393 cri.go:89] found id: ""
	I0404 22:56:58.433147   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.433160   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:58.433168   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:58.433181   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:58.488214   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:58.488251   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:58.503274   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:58.503315   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:58.583124   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:58.583149   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:58.583167   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:58.662856   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:58.662889   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:01.204061   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:01.219133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:01.219213   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:01.256615   65393 cri.go:89] found id: ""
	I0404 22:57:01.256641   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.256650   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:01.256655   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:01.256704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:01.294765   65393 cri.go:89] found id: ""
	I0404 22:57:01.294796   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.294805   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:01.294813   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:01.294874   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:01.334936   65393 cri.go:89] found id: ""
	I0404 22:57:01.334966   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.334976   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:01.334983   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:01.335030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:01.382389   65393 cri.go:89] found id: ""
	I0404 22:57:01.382429   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.382438   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:01.382443   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:01.382491   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:01.421962   65393 cri.go:89] found id: ""
	I0404 22:57:01.422001   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.422013   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:01.422020   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:01.422084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:01.460394   65393 cri.go:89] found id: ""
	I0404 22:57:01.460424   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.460432   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:01.460437   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:01.460484   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.365468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.862158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.807541   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.305535   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:03.305610   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:02.841121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:04.841717   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.497525   65393 cri.go:89] found id: ""
	I0404 22:57:01.497552   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.497561   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:01.497566   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:01.497623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:01.535396   65393 cri.go:89] found id: ""
	I0404 22:57:01.535431   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.535442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:01.535454   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:01.535469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:01.588397   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:01.588437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:01.603396   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:01.603427   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:01.686312   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:01.686332   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:01.686344   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:01.771352   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:01.771393   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.316206   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:04.330612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:04.330677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:04.377998   65393 cri.go:89] found id: ""
	I0404 22:57:04.378023   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.378031   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:04.378036   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:04.378098   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:04.421775   65393 cri.go:89] found id: ""
	I0404 22:57:04.421811   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.421822   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:04.421830   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:04.421894   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:04.463234   65393 cri.go:89] found id: ""
	I0404 22:57:04.463265   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.463276   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:04.463284   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:04.463347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:04.510879   65393 cri.go:89] found id: ""
	I0404 22:57:04.510907   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.510916   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:04.510925   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:04.511012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:04.552305   65393 cri.go:89] found id: ""
	I0404 22:57:04.552336   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.552348   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:04.552356   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:04.552413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:04.592631   65393 cri.go:89] found id: ""
	I0404 22:57:04.592661   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.592672   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:04.592679   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:04.592739   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:04.631854   65393 cri.go:89] found id: ""
	I0404 22:57:04.631883   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.631893   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:04.631900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:04.631966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:04.670530   65393 cri.go:89] found id: ""
	I0404 22:57:04.670554   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.670563   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:04.670570   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:04.670582   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:04.685546   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:04.685575   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:04.770377   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:04.770400   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:04.770420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:04.853637   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:04.853674   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.899690   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:04.899719   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:03.362881   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.363597   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.306377   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:07.805659   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:06.842313   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.341347   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:07.451203   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:07.465593   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:07.465665   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:07.505475   65393 cri.go:89] found id: ""
	I0404 22:57:07.505504   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.505516   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:07.505524   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:07.505584   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:07.543744   65393 cri.go:89] found id: ""
	I0404 22:57:07.543802   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.543814   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:07.543822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:07.543891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:07.586034   65393 cri.go:89] found id: ""
	I0404 22:57:07.586059   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.586067   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:07.586073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:07.586133   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:07.624880   65393 cri.go:89] found id: ""
	I0404 22:57:07.624908   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.624917   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:07.624932   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:07.624992   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:07.667653   65393 cri.go:89] found id: ""
	I0404 22:57:07.667684   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.667696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:07.667704   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:07.667798   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:07.704434   65393 cri.go:89] found id: ""
	I0404 22:57:07.704466   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.704478   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:07.704486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:07.704547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:07.741679   65393 cri.go:89] found id: ""
	I0404 22:57:07.741702   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.741710   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:07.741715   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:07.741760   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:07.782407   65393 cri.go:89] found id: ""
	I0404 22:57:07.782434   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.782442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:07.782450   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:07.782461   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:07.796141   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:07.796175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:07.880985   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:07.881004   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:07.881015   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:07.963986   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:07.964027   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:08.010874   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:08.010907   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.564802   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:10.581360   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:10.581430   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:10.619856   65393 cri.go:89] found id: ""
	I0404 22:57:10.619882   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.619889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:10.619895   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:10.619958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:10.662867   65393 cri.go:89] found id: ""
	I0404 22:57:10.662893   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.662900   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:10.662906   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:10.662966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:10.705241   65393 cri.go:89] found id: ""
	I0404 22:57:10.705277   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.705290   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:10.705298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:10.705363   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:10.743811   65393 cri.go:89] found id: ""
	I0404 22:57:10.743844   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.743855   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:10.743863   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:10.743918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:10.783648   65393 cri.go:89] found id: ""
	I0404 22:57:10.783672   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.783684   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:10.783690   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:10.783743   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:10.825606   65393 cri.go:89] found id: ""
	I0404 22:57:10.825639   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.825649   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:10.825657   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:10.825712   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:10.865138   65393 cri.go:89] found id: ""
	I0404 22:57:10.865168   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.865178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:10.865185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:10.865238   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:10.906855   65393 cri.go:89] found id: ""
	I0404 22:57:10.906881   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.906888   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:10.906895   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:10.906937   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.960341   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:10.960375   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:10.975199   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:10.975228   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:11.083944   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:11.083970   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:11.083985   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:11.166754   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:11.166794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:07.860607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.864277   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.806222   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:12.305904   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:11.841055   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.841573   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:15.843393   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.708326   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:13.724345   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:13.724413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:13.766699   65393 cri.go:89] found id: ""
	I0404 22:57:13.766732   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.766744   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:13.766751   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:13.766813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:13.804517   65393 cri.go:89] found id: ""
	I0404 22:57:13.804545   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.804556   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:13.804564   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:13.804623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:13.846168   65393 cri.go:89] found id: ""
	I0404 22:57:13.846202   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.846213   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:13.846219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:13.846278   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:13.894719   65393 cri.go:89] found id: ""
	I0404 22:57:13.894743   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.894753   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:13.894761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:13.894823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:13.929981   65393 cri.go:89] found id: ""
	I0404 22:57:13.930013   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.930024   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:13.930031   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:13.930102   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:13.968531   65393 cri.go:89] found id: ""
	I0404 22:57:13.968578   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.968590   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:13.968598   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:13.968662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:14.012460   65393 cri.go:89] found id: ""
	I0404 22:57:14.012492   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.012502   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:14.012509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:14.012571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:14.051175   65393 cri.go:89] found id: ""
	I0404 22:57:14.051207   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.051218   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:14.051228   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:14.051243   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:14.107968   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:14.108004   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:14.123756   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:14.123794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:14.209452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:14.209475   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:14.209493   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:14.287297   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:14.287334   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:11.864620   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.360720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.361734   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.307191   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.806031   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.341533   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.344059   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.832667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:16.850323   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:16.850396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:16.897389   65393 cri.go:89] found id: ""
	I0404 22:57:16.897415   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.897426   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:16.897433   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:16.897502   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:16.938893   65393 cri.go:89] found id: ""
	I0404 22:57:16.938927   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.938939   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:16.938945   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:16.939005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:16.978621   65393 cri.go:89] found id: ""
	I0404 22:57:16.978648   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.978655   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:16.978661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:16.978707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:17.036499   65393 cri.go:89] found id: ""
	I0404 22:57:17.036523   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.036532   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:17.036539   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:17.036596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:17.092932   65393 cri.go:89] found id: ""
	I0404 22:57:17.092958   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.092966   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:17.092972   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:17.093026   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:17.129925   65393 cri.go:89] found id: ""
	I0404 22:57:17.129948   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.129956   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:17.129963   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:17.130025   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:17.168192   65393 cri.go:89] found id: ""
	I0404 22:57:17.168218   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.168232   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:17.168238   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:17.168317   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:17.213090   65393 cri.go:89] found id: ""
	I0404 22:57:17.213115   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.213125   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:17.213136   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:17.213149   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:17.295489   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:17.295527   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:17.350780   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:17.350832   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:17.403580   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:17.403621   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:17.419093   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:17.419131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:17.503614   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.004676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:20.019340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:20.019421   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:20.059400   65393 cri.go:89] found id: ""
	I0404 22:57:20.059431   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.059442   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:20.059448   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:20.059510   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:20.104769   65393 cri.go:89] found id: ""
	I0404 22:57:20.104796   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.104808   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:20.104815   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:20.104875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:20.143209   65393 cri.go:89] found id: ""
	I0404 22:57:20.143233   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.143241   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:20.143248   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:20.143300   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:20.187941   65393 cri.go:89] found id: ""
	I0404 22:57:20.187976   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.187987   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:20.187995   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:20.188082   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:20.231158   65393 cri.go:89] found id: ""
	I0404 22:57:20.231192   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.231200   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:20.231206   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:20.231271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:20.281069   65393 cri.go:89] found id: ""
	I0404 22:57:20.281102   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.281113   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:20.281121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:20.281187   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:20.320403   65393 cri.go:89] found id: ""
	I0404 22:57:20.320436   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.320448   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:20.320456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:20.320529   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:20.371202   65393 cri.go:89] found id: ""
	I0404 22:57:20.371229   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.371237   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:20.371246   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:20.371256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:20.416698   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:20.416725   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:20.468328   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:20.468362   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:20.483250   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:20.483274   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:20.562841   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.562871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:20.562886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:18.363136   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.860719   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.806337   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:21.306098   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:22.841457   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:24.843001   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.141477   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:23.157859   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:23.157924   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:23.203492   65393 cri.go:89] found id: ""
	I0404 22:57:23.203526   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.203538   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:23.203545   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:23.203613   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:23.248190   65393 cri.go:89] found id: ""
	I0404 22:57:23.248220   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.248231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:23.248244   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:23.248310   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:23.284427   65393 cri.go:89] found id: ""
	I0404 22:57:23.284456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.284467   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:23.284475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:23.284562   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:23.322429   65393 cri.go:89] found id: ""
	I0404 22:57:23.322456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.322464   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:23.322469   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:23.322534   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:23.364030   65393 cri.go:89] found id: ""
	I0404 22:57:23.364069   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.364080   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:23.364087   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:23.364168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:23.408306   65393 cri.go:89] found id: ""
	I0404 22:57:23.408343   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.408356   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:23.408363   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:23.408423   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:23.452933   65393 cri.go:89] found id: ""
	I0404 22:57:23.452968   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.452976   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:23.452982   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:23.453036   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:23.494156   65393 cri.go:89] found id: ""
	I0404 22:57:23.494184   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.494193   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:23.494203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:23.494222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:23.548013   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:23.548053   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:23.564765   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:23.564797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:23.642661   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:23.642685   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:23.642700   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:23.737958   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:23.737996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:26.290576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:26.307580   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:26.307641   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:26.345971   65393 cri.go:89] found id: ""
	I0404 22:57:26.346000   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.346011   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:26.346018   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:26.346077   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:26.386979   65393 cri.go:89] found id: ""
	I0404 22:57:26.387009   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.387019   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:26.387026   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:26.387112   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:26.430627   65393 cri.go:89] found id: ""
	I0404 22:57:26.430649   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.430665   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:26.430671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:26.430724   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:26.469657   65393 cri.go:89] found id: ""
	I0404 22:57:26.469705   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.469716   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:26.469723   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:26.469794   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:22.861245   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.360840   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.804732   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.805727   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.805819   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.343674   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.843199   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:26.513853   65393 cri.go:89] found id: ""
	I0404 22:57:26.513877   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.513887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:26.513894   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:26.513954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:26.557702   65393 cri.go:89] found id: ""
	I0404 22:57:26.557731   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.557742   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:26.557749   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:26.557807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:26.595248   65393 cri.go:89] found id: ""
	I0404 22:57:26.595279   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.595291   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:26.595298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:26.595364   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:26.634552   65393 cri.go:89] found id: ""
	I0404 22:57:26.634581   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.634591   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:26.634603   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:26.634618   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:26.686928   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:26.686963   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:26.701308   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:26.701341   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:26.785243   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:26.785267   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:26.785286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:26.867513   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:26.867555   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.416286   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:29.434234   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:29.434308   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:29.488598   65393 cri.go:89] found id: ""
	I0404 22:57:29.488628   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.488637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:29.488643   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:29.488727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:29.541107   65393 cri.go:89] found id: ""
	I0404 22:57:29.541130   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.541138   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:29.541144   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:29.541204   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:29.597144   65393 cri.go:89] found id: ""
	I0404 22:57:29.597174   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.597186   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:29.597192   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:29.597258   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:29.636833   65393 cri.go:89] found id: ""
	I0404 22:57:29.636865   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.636874   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:29.636881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:29.636961   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:29.672867   65393 cri.go:89] found id: ""
	I0404 22:57:29.672893   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.672903   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:29.672909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:29.672970   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:29.713135   65393 cri.go:89] found id: ""
	I0404 22:57:29.713168   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.713180   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:29.713188   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:29.713279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:29.750862   65393 cri.go:89] found id: ""
	I0404 22:57:29.750895   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.750906   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:29.750914   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:29.750965   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:29.790274   65393 cri.go:89] found id: ""
	I0404 22:57:29.790302   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.790311   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:29.790320   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:29.790332   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.831848   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:29.831884   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:29.889443   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:29.889478   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:29.905468   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:29.905500   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:29.983283   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:29.983313   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:29.983328   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:27.862273   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:30.361189   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.806568   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:31.807248   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.341638   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.841257   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.567351   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:32.581988   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:32.582052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:32.619164   65393 cri.go:89] found id: ""
	I0404 22:57:32.619191   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.619201   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:32.619208   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:32.619262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:32.656844   65393 cri.go:89] found id: ""
	I0404 22:57:32.656882   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.656894   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:32.656902   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:32.656966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:32.693024   65393 cri.go:89] found id: ""
	I0404 22:57:32.693052   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.693064   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:32.693071   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:32.693134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:32.732579   65393 cri.go:89] found id: ""
	I0404 22:57:32.732609   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.732618   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:32.732625   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:32.732685   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:32.777586   65393 cri.go:89] found id: ""
	I0404 22:57:32.777624   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.777634   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:32.777644   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:32.777713   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:32.817079   65393 cri.go:89] found id: ""
	I0404 22:57:32.817115   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.817127   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:32.817136   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:32.817199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:32.856945   65393 cri.go:89] found id: ""
	I0404 22:57:32.856978   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.856988   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:32.856994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:32.857047   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:32.895047   65393 cri.go:89] found id: ""
	I0404 22:57:32.895070   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.895082   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:32.895090   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:32.895103   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.949865   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:32.949904   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:32.964629   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:32.964656   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:33.039424   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:33.039446   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:33.039458   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:33.115819   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:33.115854   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:35.659512   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:35.674512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:35.674589   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:35.714058   65393 cri.go:89] found id: ""
	I0404 22:57:35.714093   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.714102   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:35.714109   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:35.714173   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:35.753765   65393 cri.go:89] found id: ""
	I0404 22:57:35.753800   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.753811   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:35.753819   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:35.753883   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:35.792177   65393 cri.go:89] found id: ""
	I0404 22:57:35.792204   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.792216   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:35.792223   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:35.792288   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:35.829891   65393 cri.go:89] found id: ""
	I0404 22:57:35.829923   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.829943   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:35.829952   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:35.830012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:35.872224   65393 cri.go:89] found id: ""
	I0404 22:57:35.872255   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.872267   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:35.872276   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:35.872341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:35.909984   65393 cri.go:89] found id: ""
	I0404 22:57:35.910010   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.910020   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:35.910027   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:35.910088   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:35.948755   65393 cri.go:89] found id: ""
	I0404 22:57:35.948784   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.948795   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:35.948805   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:35.948868   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:35.984167   65393 cri.go:89] found id: ""
	I0404 22:57:35.984195   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.984203   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:35.984212   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:35.984224   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:35.998714   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:35.998740   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:36.070003   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:36.070028   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:36.070044   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:36.149404   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:36.149442   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:36.190749   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:36.190776   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.362424   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.861745   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.306625   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.805161   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.841627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.341476   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:38.747728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:38.761768   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:38.761840   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:38.802423   65393 cri.go:89] found id: ""
	I0404 22:57:38.802456   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.802467   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:38.802474   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:38.802525   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:38.843476   65393 cri.go:89] found id: ""
	I0404 22:57:38.843500   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.843508   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:38.843513   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:38.843583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:38.883093   65393 cri.go:89] found id: ""
	I0404 22:57:38.883125   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.883136   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:38.883145   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:38.883203   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:38.919802   65393 cri.go:89] found id: ""
	I0404 22:57:38.919831   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.919840   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:38.919847   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:38.919914   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:38.955242   65393 cri.go:89] found id: ""
	I0404 22:57:38.955280   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.955294   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:38.955302   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:38.955366   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:38.993377   65393 cri.go:89] found id: ""
	I0404 22:57:38.993409   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.993420   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:38.993428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:38.993486   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:39.033379   65393 cri.go:89] found id: ""
	I0404 22:57:39.033406   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.033417   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:39.033424   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:39.033499   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:39.076827   65393 cri.go:89] found id: ""
	I0404 22:57:39.076853   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.076862   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:39.076870   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:39.076880   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:39.134007   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:39.134045   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:39.149271   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:39.149298   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:39.222569   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:39.222593   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:39.222608   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:39.309583   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:39.309613   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:37.367650   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.862967   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.305489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.807107   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.842108   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.340297   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.343170   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.850659   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:41.866430   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:41.866493   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:41.906390   65393 cri.go:89] found id: ""
	I0404 22:57:41.906418   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.906437   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:41.906445   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:41.906506   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:41.942002   65393 cri.go:89] found id: ""
	I0404 22:57:41.942029   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.942039   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:41.942047   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:41.942105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:41.984054   65393 cri.go:89] found id: ""
	I0404 22:57:41.984078   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.984086   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:41.984092   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:41.984167   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:42.022757   65393 cri.go:89] found id: ""
	I0404 22:57:42.022779   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.022787   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:42.022792   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:42.022869   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:42.063261   65393 cri.go:89] found id: ""
	I0404 22:57:42.063284   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.063292   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:42.063297   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:42.063347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:42.102752   65393 cri.go:89] found id: ""
	I0404 22:57:42.102783   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.102800   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:42.102809   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:42.102872   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:42.139533   65393 cri.go:89] found id: ""
	I0404 22:57:42.139561   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.139570   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:42.139576   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:42.139627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:42.180944   65393 cri.go:89] found id: ""
	I0404 22:57:42.180976   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.180986   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:42.180996   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:42.181008   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:42.231499   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:42.231533   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:42.247888   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:42.247918   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:42.327828   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:42.327849   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:42.327860   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:42.413471   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:42.413509   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:44.958686   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:44.973081   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:44.973159   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:45.011068   65393 cri.go:89] found id: ""
	I0404 22:57:45.011095   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.011103   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:45.011108   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:45.011162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:45.052522   65393 cri.go:89] found id: ""
	I0404 22:57:45.052560   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.052578   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:45.052586   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:45.052649   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:45.091172   65393 cri.go:89] found id: ""
	I0404 22:57:45.091204   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.091215   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:45.091222   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:45.091289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:45.129853   65393 cri.go:89] found id: ""
	I0404 22:57:45.129883   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.129892   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:45.129900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:45.129960   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:45.167779   65393 cri.go:89] found id: ""
	I0404 22:57:45.167806   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.167815   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:45.167822   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:45.167881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:45.205111   65393 cri.go:89] found id: ""
	I0404 22:57:45.205139   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.205153   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:45.205158   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:45.205231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:45.240931   65393 cri.go:89] found id: ""
	I0404 22:57:45.240957   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.240965   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:45.240971   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:45.241033   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:45.277934   65393 cri.go:89] found id: ""
	I0404 22:57:45.277956   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.277964   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:45.277974   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:45.277989   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:45.332725   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:45.332755   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:45.349776   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:45.349806   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:45.428071   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:45.428098   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:45.428113   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:45.510148   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:45.510188   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:42.361878   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.861076   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.304994   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.305336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.841019   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.341801   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.052655   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:48.067339   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:48.067416   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:48.101725   65393 cri.go:89] found id: ""
	I0404 22:57:48.101756   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.101765   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:48.101771   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:48.101823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:48.137109   65393 cri.go:89] found id: ""
	I0404 22:57:48.137136   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.137147   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:48.137153   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:48.137216   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:48.174703   65393 cri.go:89] found id: ""
	I0404 22:57:48.174735   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.174745   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:48.174751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:48.174800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:48.213531   65393 cri.go:89] found id: ""
	I0404 22:57:48.213554   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.213565   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:48.213572   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:48.213621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:48.252457   65393 cri.go:89] found id: ""
	I0404 22:57:48.252483   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.252493   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:48.252500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:48.252566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:48.297769   65393 cri.go:89] found id: ""
	I0404 22:57:48.297797   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.297807   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:48.297815   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:48.297871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:48.336275   65393 cri.go:89] found id: ""
	I0404 22:57:48.336297   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.336312   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:48.336319   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:48.336373   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:48.375192   65393 cri.go:89] found id: ""
	I0404 22:57:48.375220   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.375230   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:48.375242   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:48.375256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:48.428276   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:48.428309   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:48.442545   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:48.442572   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:48.521287   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:48.521314   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:48.521326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:48.605921   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:48.605965   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:51.148671   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:51.163922   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:51.163993   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:51.205098   65393 cri.go:89] found id: ""
	I0404 22:57:51.205127   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.205137   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:51.205145   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:51.205210   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:51.241238   65393 cri.go:89] found id: ""
	I0404 22:57:51.241265   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.241276   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:51.241287   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:51.241352   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:51.279667   65393 cri.go:89] found id: ""
	I0404 22:57:51.279691   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.279700   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:51.279710   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:51.279767   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:51.325092   65393 cri.go:89] found id: ""
	I0404 22:57:51.325114   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.325121   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:51.325127   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:51.325175   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:51.367212   65393 cri.go:89] found id: ""
	I0404 22:57:51.367233   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.367244   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:51.367252   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:51.367334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:51.407716   65393 cri.go:89] found id: ""
	I0404 22:57:51.407739   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.407747   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:51.407753   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:51.407800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:51.448021   65393 cri.go:89] found id: ""
	I0404 22:57:51.448050   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.448060   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:51.448066   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:51.448138   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:46.862201   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:49.361167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.365099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.806224   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.306785   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.840934   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.845038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.484815   65393 cri.go:89] found id: ""
	I0404 22:57:51.484841   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.484849   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:51.484857   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:51.484868   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:51.538503   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:51.538530   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:51.552658   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:51.552686   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:51.628261   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:51.628288   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:51.628313   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:51.713890   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:51.713935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:54.257046   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:54.270797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:54.270853   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:54.308550   65393 cri.go:89] found id: ""
	I0404 22:57:54.308579   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.308589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:54.308596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:54.308666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:54.347736   65393 cri.go:89] found id: ""
	I0404 22:57:54.347764   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.347773   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:54.347779   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:54.347825   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:54.389179   65393 cri.go:89] found id: ""
	I0404 22:57:54.389209   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.389220   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:54.389227   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:54.389286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:54.429961   65393 cri.go:89] found id: ""
	I0404 22:57:54.429984   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.429993   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:54.429998   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:54.430053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:54.469383   65393 cri.go:89] found id: ""
	I0404 22:57:54.469406   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.469414   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:54.469419   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:54.469481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:54.506260   65393 cri.go:89] found id: ""
	I0404 22:57:54.506291   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.506302   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:54.506309   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:54.506374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:54.545050   65393 cri.go:89] found id: ""
	I0404 22:57:54.545080   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.545090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:54.545096   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:54.545144   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:54.584895   65393 cri.go:89] found id: ""
	I0404 22:57:54.584932   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.584943   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:54.584955   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:54.584969   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:54.637294   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:54.637329   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:54.652792   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:54.652821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:54.729248   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:54.729268   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:54.729286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:54.810107   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:54.810135   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:53.860177   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.861048   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.805077   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.806516   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.306075   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.341767   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.343121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:57.356688   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:57.371841   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:57.371901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:57.408063   65393 cri.go:89] found id: ""
	I0404 22:57:57.408087   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.408095   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:57.408112   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:57.408199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:57.444197   65393 cri.go:89] found id: ""
	I0404 22:57:57.444222   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.444231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:57.444241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:57.444291   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:57.479502   65393 cri.go:89] found id: ""
	I0404 22:57:57.479528   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.479536   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:57.479542   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:57.479593   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:57.515017   65393 cri.go:89] found id: ""
	I0404 22:57:57.515044   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.515057   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:57.515064   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:57.515113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:57.550568   65393 cri.go:89] found id: ""
	I0404 22:57:57.550595   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.550603   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:57.550609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:57.550670   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:57.587662   65393 cri.go:89] found id: ""
	I0404 22:57:57.587689   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.587697   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:57.587703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:57.587761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:57.624184   65393 cri.go:89] found id: ""
	I0404 22:57:57.624206   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.624213   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:57.624219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:57.624274   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:57.662289   65393 cri.go:89] found id: ""
	I0404 22:57:57.662312   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.662320   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:57.662329   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:57.662339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:57.677703   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:57.677728   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:57.768164   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:57.768191   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:57.768206   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.853138   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:57.853175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:57.896254   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:57.896291   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.448289   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:00.462243   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:00.462306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:00.500146   65393 cri.go:89] found id: ""
	I0404 22:58:00.500180   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.500191   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:00.500199   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:00.500260   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:00.539443   65393 cri.go:89] found id: ""
	I0404 22:58:00.539469   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.539477   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:00.539482   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:00.539532   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:00.582120   65393 cri.go:89] found id: ""
	I0404 22:58:00.582149   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.582160   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:00.582167   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:00.582231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:00.622269   65393 cri.go:89] found id: ""
	I0404 22:58:00.622291   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.622299   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:00.622305   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:00.622361   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:00.659947   65393 cri.go:89] found id: ""
	I0404 22:58:00.659980   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.659992   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:00.659999   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:00.660053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:00.695604   65393 cri.go:89] found id: ""
	I0404 22:58:00.695632   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.695642   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:00.695650   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:00.695707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:00.737962   65393 cri.go:89] found id: ""
	I0404 22:58:00.738044   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.738065   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:00.738075   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:00.738168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:00.781043   65393 cri.go:89] found id: ""
	I0404 22:58:00.781069   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.781076   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:00.781085   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:00.781096   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:00.828143   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:00.828174   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.883483   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:00.883521   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:00.899435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:00.899463   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:00.975258   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:00.975277   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:00.975288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.861562   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.362255   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.306310   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.805208   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.844797   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.341139   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:03.553230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:03.567909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:03.568005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:03.608638   65393 cri.go:89] found id: ""
	I0404 22:58:03.608666   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.608675   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:03.608680   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:03.608727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:03.647698   65393 cri.go:89] found id: ""
	I0404 22:58:03.647726   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.647735   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:03.647741   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:03.647804   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:03.686143   65393 cri.go:89] found id: ""
	I0404 22:58:03.686167   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.686176   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:03.686181   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:03.686230   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:03.723462   65393 cri.go:89] found id: ""
	I0404 22:58:03.723487   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.723494   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:03.723500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:03.723556   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:03.762299   65393 cri.go:89] found id: ""
	I0404 22:58:03.762332   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.762342   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:03.762350   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:03.762426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:03.798165   65393 cri.go:89] found id: ""
	I0404 22:58:03.798198   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.798215   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:03.798225   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:03.798292   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:03.837607   65393 cri.go:89] found id: ""
	I0404 22:58:03.837636   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.837648   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:03.837655   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:03.837716   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:03.877346   65393 cri.go:89] found id: ""
	I0404 22:58:03.877380   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.877394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:03.877410   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:03.877432   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:03.934033   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:03.934073   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:03.949106   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:03.949131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:04.022674   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:04.022695   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:04.022707   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:04.101187   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:04.101225   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:02.860519   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:04.861267   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.305637   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.804768   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.341713   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.841527   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:06.653116   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:06.667790   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:06.667867   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:06.712208   65393 cri.go:89] found id: ""
	I0404 22:58:06.712230   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.712238   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:06.712243   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:06.712289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:06.748489   65393 cri.go:89] found id: ""
	I0404 22:58:06.748522   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.748533   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:06.748540   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:06.748602   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:06.786720   65393 cri.go:89] found id: ""
	I0404 22:58:06.786745   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.786753   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:06.786758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:06.786805   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:06.825404   65393 cri.go:89] found id: ""
	I0404 22:58:06.825437   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.825444   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:06.825461   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:06.825515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:06.864851   65393 cri.go:89] found id: ""
	I0404 22:58:06.864879   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.864890   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:06.864898   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:06.864959   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:06.906229   65393 cri.go:89] found id: ""
	I0404 22:58:06.906258   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.906268   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:06.906274   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:06.906327   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:06.946126   65393 cri.go:89] found id: ""
	I0404 22:58:06.946153   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.946164   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:06.946172   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:06.946234   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:06.983746   65393 cri.go:89] found id: ""
	I0404 22:58:06.983779   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.983792   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:06.983805   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:06.983821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:07.038290   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:07.038330   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:07.053847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:07.053875   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:07.127453   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:07.127479   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:07.127496   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:07.206638   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:07.206676   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:09.753016   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:09.768850   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:09.768918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:09.804549   65393 cri.go:89] found id: ""
	I0404 22:58:09.804579   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.804589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:09.804596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:09.804653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:09.847299   65393 cri.go:89] found id: ""
	I0404 22:58:09.847323   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.847334   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:09.847341   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:09.847399   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:09.902064   65393 cri.go:89] found id: ""
	I0404 22:58:09.902093   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.902104   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:09.902111   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:09.902171   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:09.953956   65393 cri.go:89] found id: ""
	I0404 22:58:09.953986   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.953997   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:09.954003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:09.954071   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:10.007853   65393 cri.go:89] found id: ""
	I0404 22:58:10.007884   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.007892   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:10.007897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:10.007954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:10.046924   65393 cri.go:89] found id: ""
	I0404 22:58:10.046960   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.046970   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:10.046977   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:10.047038   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:10.086851   65393 cri.go:89] found id: ""
	I0404 22:58:10.086878   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.086890   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:10.086896   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:10.086956   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:10.126678   65393 cri.go:89] found id: ""
	I0404 22:58:10.126710   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.126719   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:10.126727   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:10.126741   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:10.142641   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:10.142669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:10.226953   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:10.226978   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:10.226991   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:10.310046   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:10.310078   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:10.356140   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:10.356173   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:06.861772   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.361398   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:11.361942   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.806398   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.306003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.341652   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.341848   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.343338   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.911501   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:12.924292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:12.924374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:12.959969   65393 cri.go:89] found id: ""
	I0404 22:58:12.959997   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.960007   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:12.960015   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:12.960064   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:12.998817   65393 cri.go:89] found id: ""
	I0404 22:58:12.998846   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.998856   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:12.998876   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:12.998943   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:13.035295   65393 cri.go:89] found id: ""
	I0404 22:58:13.035326   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.035337   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:13.035343   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:13.035403   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:13.076712   65393 cri.go:89] found id: ""
	I0404 22:58:13.076735   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.076744   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:13.076751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:13.076823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:13.112987   65393 cri.go:89] found id: ""
	I0404 22:58:13.113015   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.113023   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:13.113029   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:13.113092   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:13.155330   65393 cri.go:89] found id: ""
	I0404 22:58:13.155355   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.155366   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:13.155373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:13.155432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:13.194322   65393 cri.go:89] found id: ""
	I0404 22:58:13.194357   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.194368   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:13.194374   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:13.194432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:13.231939   65393 cri.go:89] found id: ""
	I0404 22:58:13.231969   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.231981   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:13.231993   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:13.232012   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:13.312244   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:13.312278   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:13.356605   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:13.356640   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:13.411760   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:13.411801   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:13.427373   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:13.427397   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:13.509840   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.010230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:16.023985   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:16.024050   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:16.063038   65393 cri.go:89] found id: ""
	I0404 22:58:16.063062   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.063070   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:16.063075   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:16.063128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:16.100140   65393 cri.go:89] found id: ""
	I0404 22:58:16.100170   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.100182   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:16.100189   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:16.100252   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:16.137319   65393 cri.go:89] found id: ""
	I0404 22:58:16.137354   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.137362   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:16.137373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:16.137425   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:16.173127   65393 cri.go:89] found id: ""
	I0404 22:58:16.173151   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.173159   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:16.173166   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:16.173212   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:16.213639   65393 cri.go:89] found id: ""
	I0404 22:58:16.213667   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.213676   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:16.213683   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:16.213744   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:16.255272   65393 cri.go:89] found id: ""
	I0404 22:58:16.255301   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.255312   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:16.255320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:16.255381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:16.289359   65393 cri.go:89] found id: ""
	I0404 22:58:16.289387   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.289397   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:16.289404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:16.289466   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:16.326711   65393 cri.go:89] found id: ""
	I0404 22:58:16.326738   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.326748   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:16.326791   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:16.326817   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:16.374754   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:16.374788   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:16.425956   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:16.425994   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:16.440815   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:16.440848   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:58:13.363022   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:15.860774   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.306144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.804788   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.844733   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:21.340627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	W0404 22:58:16.512725   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.512750   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:16.512770   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.099694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:19.113559   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:19.113617   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:19.148531   65393 cri.go:89] found id: ""
	I0404 22:58:19.148553   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.148561   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:19.148567   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:19.148627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:19.186728   65393 cri.go:89] found id: ""
	I0404 22:58:19.186750   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.186759   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:19.186764   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:19.186809   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:19.223221   65393 cri.go:89] found id: ""
	I0404 22:58:19.223269   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.223277   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:19.223283   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:19.223350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:19.260459   65393 cri.go:89] found id: ""
	I0404 22:58:19.260492   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.260502   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:19.260509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:19.260571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:19.298503   65393 cri.go:89] found id: ""
	I0404 22:58:19.298527   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.298534   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:19.298540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:19.298603   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:19.339562   65393 cri.go:89] found id: ""
	I0404 22:58:19.339595   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.339605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:19.339613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:19.339674   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:19.379350   65393 cri.go:89] found id: ""
	I0404 22:58:19.379383   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.379394   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:19.379401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:19.379501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:19.417360   65393 cri.go:89] found id: ""
	I0404 22:58:19.417387   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.417394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:19.417403   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:19.417420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.493267   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:19.493300   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:19.533913   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:19.533948   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:19.585900   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:19.585936   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:19.601225   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:19.601259   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:19.675774   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:17.861139   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.360910   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.806558   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.807066   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.304681   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.342104   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.843695   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:22.176660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:22.190161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:22.190239   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:22.226569   65393 cri.go:89] found id: ""
	I0404 22:58:22.226601   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.226612   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:22.226621   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:22.226678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:22.263193   65393 cri.go:89] found id: ""
	I0404 22:58:22.263221   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.263232   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:22.263239   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:22.263296   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:22.305585   65393 cri.go:89] found id: ""
	I0404 22:58:22.305613   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.305625   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:22.305632   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:22.305688   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:22.343575   65393 cri.go:89] found id: ""
	I0404 22:58:22.343602   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.343613   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:22.343620   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:22.343675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:22.381394   65393 cri.go:89] found id: ""
	I0404 22:58:22.381423   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.381432   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:22.381438   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:22.381488   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:22.423631   65393 cri.go:89] found id: ""
	I0404 22:58:22.423664   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.423673   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:22.423680   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:22.423755   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:22.462618   65393 cri.go:89] found id: ""
	I0404 22:58:22.462651   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.462662   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:22.462669   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:22.462729   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:22.499448   65393 cri.go:89] found id: ""
	I0404 22:58:22.499473   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.499481   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:22.499490   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:22.499504   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:22.552937   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:22.552976   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:22.568480   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:22.568508   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:22.647552   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:22.647575   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:22.647587   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:22.732328   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:22.732366   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:25.275187   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:25.290575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:25.290660   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:25.329699   65393 cri.go:89] found id: ""
	I0404 22:58:25.329728   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.329737   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:25.329744   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:25.329808   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:25.368206   65393 cri.go:89] found id: ""
	I0404 22:58:25.368239   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.368250   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:25.368257   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:25.368320   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:25.405444   65393 cri.go:89] found id: ""
	I0404 22:58:25.405490   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.405500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:25.405508   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:25.405566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:25.448856   65393 cri.go:89] found id: ""
	I0404 22:58:25.448883   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.448891   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:25.448897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:25.448952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:25.489308   65393 cri.go:89] found id: ""
	I0404 22:58:25.489340   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.489351   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:25.489358   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:25.489418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:25.532786   65393 cri.go:89] found id: ""
	I0404 22:58:25.532810   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.532820   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:25.532828   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:25.532887   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:25.569370   65393 cri.go:89] found id: ""
	I0404 22:58:25.569400   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.569409   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:25.569428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:25.569508   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:25.609516   65393 cri.go:89] found id: ""
	I0404 22:58:25.609542   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.609553   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:25.609564   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:25.609579   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:25.661976   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:25.662011   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:25.676718   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:25.676743   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:25.754582   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:25.754612   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:25.754629   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:25.837707   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:25.837751   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:22.367423   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:24.860670   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:27.306275   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.343146   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.840946   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.381474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:28.396475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:28.396549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:28.439207   65393 cri.go:89] found id: ""
	I0404 22:58:28.439239   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.439251   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:28.439259   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:28.439341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:28.481537   65393 cri.go:89] found id: ""
	I0404 22:58:28.481559   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.481567   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:28.481572   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:28.481622   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:28.522159   65393 cri.go:89] found id: ""
	I0404 22:58:28.522183   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.522194   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:28.522202   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:28.522267   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:28.563587   65393 cri.go:89] found id: ""
	I0404 22:58:28.563623   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.563634   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:28.563641   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:28.563706   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:28.601846   65393 cri.go:89] found id: ""
	I0404 22:58:28.601874   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.601885   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:28.601892   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:28.601971   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:28.639734   65393 cri.go:89] found id: ""
	I0404 22:58:28.639758   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.639765   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:28.639773   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:28.639832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:28.679049   65393 cri.go:89] found id: ""
	I0404 22:58:28.679079   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.679090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:28.679097   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:28.679152   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:28.721353   65393 cri.go:89] found id: ""
	I0404 22:58:28.721380   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.721390   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:28.721400   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:28.721414   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:28.776618   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:28.776666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:28.792435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:28.792473   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:28.867190   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:28.867219   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:28.867238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:28.950021   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:28.950056   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:26.861121   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.861752   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.862006   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:29.805482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.806982   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:32.843651   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.341399   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.496813   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:31.511401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:31.511462   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:31.552422   65393 cri.go:89] found id: ""
	I0404 22:58:31.552450   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.552458   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:31.552464   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:31.552518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:31.590548   65393 cri.go:89] found id: ""
	I0404 22:58:31.590579   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.590591   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:31.590598   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:31.590653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:31.626198   65393 cri.go:89] found id: ""
	I0404 22:58:31.626227   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.626238   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:31.626245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:31.626316   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:31.664925   65393 cri.go:89] found id: ""
	I0404 22:58:31.664952   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.664960   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:31.664966   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:31.665017   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:31.703008   65393 cri.go:89] found id: ""
	I0404 22:58:31.703031   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.703038   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:31.703044   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:31.703103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:31.741854   65393 cri.go:89] found id: ""
	I0404 22:58:31.741875   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.741884   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:31.741890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:31.741942   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:31.782573   65393 cri.go:89] found id: ""
	I0404 22:58:31.782603   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.782616   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:31.782624   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:31.782675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:31.819796   65393 cri.go:89] found id: ""
	I0404 22:58:31.819832   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.819844   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:31.819855   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:31.819872   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:31.836362   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:31.836396   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:31.916417   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:31.916438   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:31.916451   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:31.999266   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:31.999302   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:32.048147   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:32.048179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:34.601683   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:34.618245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:34.618329   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:34.660565   65393 cri.go:89] found id: ""
	I0404 22:58:34.660591   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.660598   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:34.660604   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:34.660654   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:34.696366   65393 cri.go:89] found id: ""
	I0404 22:58:34.696389   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.696397   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:34.696402   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:34.696463   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:34.734234   65393 cri.go:89] found id: ""
	I0404 22:58:34.734281   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.734293   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:34.734300   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:34.734369   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:34.770632   65393 cri.go:89] found id: ""
	I0404 22:58:34.770668   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.770681   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:34.770689   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:34.770752   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:34.808562   65393 cri.go:89] found id: ""
	I0404 22:58:34.808590   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.808600   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:34.808607   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:34.808677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:34.844177   65393 cri.go:89] found id: ""
	I0404 22:58:34.844209   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.844219   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:34.844228   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:34.844315   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:34.886060   65393 cri.go:89] found id: ""
	I0404 22:58:34.886095   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.886106   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:34.886114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:34.886174   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:34.924731   65393 cri.go:89] found id: ""
	I0404 22:58:34.924759   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.924769   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:34.924781   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:34.924798   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:34.940405   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:34.940437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:35.016762   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:35.016784   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:35.016800   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:35.096653   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:35.096688   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:35.144213   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:35.144241   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:33.361607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.860598   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:34.307621   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:36.805003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.341552   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.841619   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.702332   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:37.716442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:37.716515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:37.755197   65393 cri.go:89] found id: ""
	I0404 22:58:37.755233   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.755244   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:37.755251   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:37.755311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:37.799011   65393 cri.go:89] found id: ""
	I0404 22:58:37.799036   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.799044   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:37.799048   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:37.799105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:37.837437   65393 cri.go:89] found id: ""
	I0404 22:58:37.837466   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.837477   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:37.837486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:37.837543   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:37.876014   65393 cri.go:89] found id: ""
	I0404 22:58:37.876085   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.876097   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:37.876104   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:37.876179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:37.915088   65393 cri.go:89] found id: ""
	I0404 22:58:37.915121   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.915132   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:37.915140   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:37.915205   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:37.954991   65393 cri.go:89] found id: ""
	I0404 22:58:37.955028   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.955039   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:37.955047   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:37.955120   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:38.006875   65393 cri.go:89] found id: ""
	I0404 22:58:38.006906   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.006924   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:38.006930   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:38.006989   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:38.044477   65393 cri.go:89] found id: ""
	I0404 22:58:38.044513   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.044541   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:38.044553   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:38.044569   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:38.086425   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:38.086455   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:38.140159   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:38.140195   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:38.156371   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:38.156406   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:38.229011   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:38.229035   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:38.229058   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:40.809399   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:40.824612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:40.824694   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:40.869397   65393 cri.go:89] found id: ""
	I0404 22:58:40.869483   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.869510   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:40.869523   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:40.869583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:40.911732   65393 cri.go:89] found id: ""
	I0404 22:58:40.911760   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.911782   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:40.911788   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:40.911846   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:40.952165   65393 cri.go:89] found id: ""
	I0404 22:58:40.952193   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.952202   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:40.952209   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:40.952270   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:40.991566   65393 cri.go:89] found id: ""
	I0404 22:58:40.991598   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.991607   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:40.991613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:40.991661   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:41.033467   65393 cri.go:89] found id: ""
	I0404 22:58:41.033496   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.033505   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:41.033534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:41.033595   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:41.073350   65393 cri.go:89] found id: ""
	I0404 22:58:41.073395   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.073405   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:41.073410   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:41.073460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:41.113435   65393 cri.go:89] found id: ""
	I0404 22:58:41.113467   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.113478   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:41.113486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:41.113549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:41.152848   65393 cri.go:89] found id: ""
	I0404 22:58:41.152882   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.152892   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:41.152905   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:41.152919   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:41.199001   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:41.199039   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:41.251155   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:41.251200   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:41.268640   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:41.268669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:41.345101   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:41.345125   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:41.345142   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:37.862623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.865276   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:38.805692   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:40.806509   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.305851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:42.342629   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.841943   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.925251   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:43.940719   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:43.940838   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:43.982366   65393 cri.go:89] found id: ""
	I0404 22:58:43.982391   65393 logs.go:276] 0 containers: []
	W0404 22:58:43.982401   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:43.982409   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:43.982477   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:44.026906   65393 cri.go:89] found id: ""
	I0404 22:58:44.026941   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.026952   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:44.026959   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:44.027024   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:44.063914   65393 cri.go:89] found id: ""
	I0404 22:58:44.063940   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.063948   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:44.063954   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:44.064008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:44.107234   65393 cri.go:89] found id: ""
	I0404 22:58:44.107270   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.107283   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:44.107292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:44.107388   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:44.144613   65393 cri.go:89] found id: ""
	I0404 22:58:44.144637   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.144658   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:44.144664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:44.144734   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:44.184821   65393 cri.go:89] found id: ""
	I0404 22:58:44.184858   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.184866   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:44.184872   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:44.184920   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:44.227152   65393 cri.go:89] found id: ""
	I0404 22:58:44.227181   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.227192   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:44.227200   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:44.227262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:44.266503   65393 cri.go:89] found id: ""
	I0404 22:58:44.266533   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.266544   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:44.266599   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:44.266614   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:44.323524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:44.323565   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:44.340420   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:44.340456   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:44.441098   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:44.441120   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:44.441137   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:44.554462   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:44.554498   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:42.361529   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.362158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:45.805321   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.805597   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.342230   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.840465   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.101901   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:47.116485   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:47.116551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:47.158013   65393 cri.go:89] found id: ""
	I0404 22:58:47.158047   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.158063   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:47.158071   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:47.158136   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:47.194653   65393 cri.go:89] found id: ""
	I0404 22:58:47.194677   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.194688   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:47.194696   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:47.194753   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:47.235406   65393 cri.go:89] found id: ""
	I0404 22:58:47.235435   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.235447   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:47.235456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:47.235518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:47.278696   65393 cri.go:89] found id: ""
	I0404 22:58:47.278724   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.278733   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:47.278741   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:47.278832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:47.316839   65393 cri.go:89] found id: ""
	I0404 22:58:47.316871   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.316883   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:47.316890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:47.316952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:47.363249   65393 cri.go:89] found id: ""
	I0404 22:58:47.363274   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.363282   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:47.363287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:47.363336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:47.402331   65393 cri.go:89] found id: ""
	I0404 22:58:47.402354   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.402362   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:47.402369   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:47.402429   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:47.441136   65393 cri.go:89] found id: ""
	I0404 22:58:47.441156   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.441163   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:47.441171   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:47.441182   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:47.518956   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:47.518981   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:47.518996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:47.600303   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:47.600339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:47.642110   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:47.642138   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:47.694231   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:47.694267   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.209744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:50.229994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:50.230052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:50.299090   65393 cri.go:89] found id: ""
	I0404 22:58:50.299237   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.299257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:50.299266   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:50.299324   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:50.353476   65393 cri.go:89] found id: ""
	I0404 22:58:50.353506   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.353516   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:50.353524   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:50.353580   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:50.407652   65393 cri.go:89] found id: ""
	I0404 22:58:50.407677   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.407684   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:50.407692   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:50.407775   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:50.445622   65393 cri.go:89] found id: ""
	I0404 22:58:50.445651   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.445658   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:50.445666   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:50.445749   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:50.485764   65393 cri.go:89] found id: ""
	I0404 22:58:50.485791   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.485803   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:50.485810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:50.485891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:50.525552   65393 cri.go:89] found id: ""
	I0404 22:58:50.525583   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.525601   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:50.525609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:50.525675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:50.564452   65393 cri.go:89] found id: ""
	I0404 22:58:50.564477   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.564488   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:50.564498   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:50.564548   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:50.608534   65393 cri.go:89] found id: ""
	I0404 22:58:50.608564   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.608572   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:50.608580   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:50.608592   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:50.686645   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:50.686690   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:50.731681   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:50.731711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:50.788550   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:50.788593   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.804637   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:50.804666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:50.888452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:46.860641   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.362015   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.808312   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:52.304782   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:51.841258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.842803   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.341352   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.388937   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:53.403744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:53.403822   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:53.441796   65393 cri.go:89] found id: ""
	I0404 22:58:53.441817   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.441826   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:53.441835   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:53.441899   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:53.486302   65393 cri.go:89] found id: ""
	I0404 22:58:53.486326   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.486335   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:53.486340   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:53.486405   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:53.526429   65393 cri.go:89] found id: ""
	I0404 22:58:53.526455   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.526462   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:53.526467   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:53.526521   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:53.568076   65393 cri.go:89] found id: ""
	I0404 22:58:53.568099   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.568107   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:53.568114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:53.568185   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:53.608922   65393 cri.go:89] found id: ""
	I0404 22:58:53.608956   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.608964   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:53.608970   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:53.609027   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:53.645604   65393 cri.go:89] found id: ""
	I0404 22:58:53.645635   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.645646   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:53.645658   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:53.645718   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:53.684258   65393 cri.go:89] found id: ""
	I0404 22:58:53.684283   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.684293   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:53.684301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:53.684359   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:53.722616   65393 cri.go:89] found id: ""
	I0404 22:58:53.722647   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.722658   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:53.722671   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:53.722685   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:53.781126   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:53.781162   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:53.796188   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:53.796219   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:53.880536   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:53.880558   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:53.880571   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:53.970199   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:53.970236   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:51.861594   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.864468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.361876   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:54.306294   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.805489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.341483   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.840493   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.510589   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:56.525258   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:56.525336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:56.563463   65393 cri.go:89] found id: ""
	I0404 22:58:56.563495   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.563506   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:56.563514   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:56.563576   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:56.603330   65393 cri.go:89] found id: ""
	I0404 22:58:56.603355   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.603364   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:56.603369   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:56.603418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:56.645266   65393 cri.go:89] found id: ""
	I0404 22:58:56.645291   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.645351   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:56.645368   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:56.645426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:56.691073   65393 cri.go:89] found id: ""
	I0404 22:58:56.691098   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.691108   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:56.691121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:56.691179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:56.729687   65393 cri.go:89] found id: ""
	I0404 22:58:56.729726   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.729737   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:56.729744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:56.729832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:56.770699   65393 cri.go:89] found id: ""
	I0404 22:58:56.770732   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.770743   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:56.770751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:56.770815   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:56.808948   65393 cri.go:89] found id: ""
	I0404 22:58:56.808972   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.808982   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:56.808989   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:56.809069   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:56.852467   65393 cri.go:89] found id: ""
	I0404 22:58:56.852490   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.852501   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:56.852511   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:56.852526   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:56.903115   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:56.903147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:56.919521   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:56.919550   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:56.995265   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:56.995297   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:56.995317   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:57.071623   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:57.071660   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:59.614687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:59.628404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:59.628474   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:59.666293   65393 cri.go:89] found id: ""
	I0404 22:58:59.666320   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.666328   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:59.666332   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:59.666407   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:59.706066   65393 cri.go:89] found id: ""
	I0404 22:58:59.706093   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.706104   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:59.706111   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:59.706166   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:59.750467   65393 cri.go:89] found id: ""
	I0404 22:58:59.750493   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.750504   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:59.750511   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:59.750575   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:59.788477   65393 cri.go:89] found id: ""
	I0404 22:58:59.788499   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.788507   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:59.788512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:59.788558   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:59.829030   65393 cri.go:89] found id: ""
	I0404 22:58:59.829052   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.829062   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:59.829069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:59.829151   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:59.870121   65393 cri.go:89] found id: ""
	I0404 22:58:59.870146   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.870156   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:59.870163   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:59.870225   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:59.912149   65393 cri.go:89] found id: ""
	I0404 22:58:59.912170   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.912178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:59.912185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:59.912245   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:59.950867   65393 cri.go:89] found id: ""
	I0404 22:58:59.950903   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.950914   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:59.950924   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:59.950950   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:00.031828   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:00.031862   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:00.079398   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:00.079425   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:00.128993   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:00.129024   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:00.146214   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:00.146238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:00.224580   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:58.362770   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.861231   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.806527   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.305039   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:03.306465   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.845344   64902 pod_ready.go:81] duration metric: took 4m0.011500779s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:01.845369   64902 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:01.845377   64902 pod_ready.go:38] duration metric: took 4m3.21302807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:01.845392   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:01.845433   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:01.845499   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:01.927539   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:01.927568   64902 cri.go:89] found id: ""
	I0404 22:59:01.927578   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:01.927638   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.933410   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:01.933496   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:01.990735   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:01.990765   64902 cri.go:89] found id: ""
	I0404 22:59:01.990773   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:01.990823   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.996039   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:01.996159   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.043251   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:02.043278   64902 cri.go:89] found id: ""
	I0404 22:59:02.043286   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:02.043339   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.048227   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.048300   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.089311   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:02.089334   64902 cri.go:89] found id: ""
	I0404 22:59:02.089344   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:02.089400   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.094466   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.094531   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.146624   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:02.146646   64902 cri.go:89] found id: ""
	I0404 22:59:02.146653   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:02.146711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.151408   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.151491   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.194337   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:02.194361   64902 cri.go:89] found id: ""
	I0404 22:59:02.194370   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:02.194423   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.199225   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:02.199290   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:02.240467   64902 cri.go:89] found id: ""
	I0404 22:59:02.240494   64902 logs.go:276] 0 containers: []
	W0404 22:59:02.240505   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:02.240511   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:02.240572   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:02.284235   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:02.284261   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.284265   64902 cri.go:89] found id: ""
	I0404 22:59:02.284272   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:02.284337   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.289673   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.294519   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:02.294542   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.335250   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:02.335274   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:02.903414   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:02.903453   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:02.959171   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:02.959205   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:03.121608   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:03.121639   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:03.178477   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:03.178513   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:03.220790   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:03.220827   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:03.268659   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:03.268691   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:03.349809   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:03.349853   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:03.397296   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.397325   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:03.450216   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.450242   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.467583   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:03.467610   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:03.525777   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:03.525816   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.077111   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:06.097631   64902 api_server.go:72] duration metric: took 4m15.180038628s to wait for apiserver process to appear ...
	I0404 22:59:06.097660   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:06.097705   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:06.097767   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:06.146987   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:06.147013   64902 cri.go:89] found id: ""
	I0404 22:59:06.147023   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:06.147083   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.153474   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:06.153549   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:06.204491   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.204515   64902 cri.go:89] found id: ""
	I0404 22:59:06.204522   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:06.204576   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.209689   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:06.209768   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.248698   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:06.248730   64902 cri.go:89] found id: ""
	I0404 22:59:06.248741   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:06.248803   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.254268   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.254362   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.301004   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.301024   64902 cri.go:89] found id: ""
	I0404 22:59:06.301034   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:06.301093   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.306557   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.306625   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.350111   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:06.350136   64902 cri.go:89] found id: ""
	I0404 22:59:06.350146   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:06.350205   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.355488   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.355574   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.724888   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:02.738926   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:02.738986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:02.780451   65393 cri.go:89] found id: ""
	I0404 22:59:02.780475   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.780486   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:02.780493   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:02.780551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:02.825725   65393 cri.go:89] found id: ""
	I0404 22:59:02.825745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.825753   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:02.825758   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:02.825806   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.866717   65393 cri.go:89] found id: ""
	I0404 22:59:02.866745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.866752   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:02.866757   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.866803   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.909020   65393 cri.go:89] found id: ""
	I0404 22:59:02.909040   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.909048   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:02.909053   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.909103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.951031   65393 cri.go:89] found id: ""
	I0404 22:59:02.951055   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.951064   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:02.951069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.951128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:03.000274   65393 cri.go:89] found id: ""
	I0404 22:59:03.000304   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.000315   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:03.000322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:03.000385   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:03.041766   65393 cri.go:89] found id: ""
	I0404 22:59:03.041797   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.041807   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:03.041814   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:03.041871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:03.086600   65393 cri.go:89] found id: ""
	I0404 22:59:03.086623   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.086631   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:03.086639   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:03.086654   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:03.145868   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.145902   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.164345   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:03.164373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:03.239295   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:03.239331   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:03.239347   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:03.337429   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.337471   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:05.885881   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:05.899569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:05.899627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:05.939067   65393 cri.go:89] found id: ""
	I0404 22:59:05.939090   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.939097   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:05.939104   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:05.939163   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:05.977410   65393 cri.go:89] found id: ""
	I0404 22:59:05.977434   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.977441   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:05.977447   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:05.977492   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.018127   65393 cri.go:89] found id: ""
	I0404 22:59:06.018149   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.018156   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:06.018161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.018211   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.057280   65393 cri.go:89] found id: ""
	I0404 22:59:06.057316   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.057327   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:06.057334   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.057396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.095220   65393 cri.go:89] found id: ""
	I0404 22:59:06.095246   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.095255   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:06.095262   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.095334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:06.136192   65393 cri.go:89] found id: ""
	I0404 22:59:06.136291   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.136310   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:06.136320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.136381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.193307   65393 cri.go:89] found id: ""
	I0404 22:59:06.193336   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.193347   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.193355   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:06.193415   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:06.233527   65393 cri.go:89] found id: ""
	I0404 22:59:06.233558   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.233566   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:06.233574   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.233585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:06.320567   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.320602   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.363687   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:06.363718   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:06.423209   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:06.423246   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:06.437978   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:06.438009   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:02.862485   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.360827   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.805057   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:07.805584   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:06.405660   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.405683   64902 cri.go:89] found id: ""
	I0404 22:59:06.405693   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:06.405758   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.410717   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.410794   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.462354   64902 cri.go:89] found id: ""
	I0404 22:59:06.462386   64902 logs.go:276] 0 containers: []
	W0404 22:59:06.462398   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.462404   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:06.462452   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:06.511014   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.511038   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.511044   64902 cri.go:89] found id: ""
	I0404 22:59:06.511052   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:06.511110   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.517858   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.522766   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:06.522794   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.576654   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:06.576689   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.623256   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:06.623286   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.678337   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:06.678369   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.722261   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:06.722291   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.762151   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.762184   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.814956   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.814983   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:07.273914   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:07.273975   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:07.328704   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:07.328745   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:07.344734   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:07.344765   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:07.473031   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:07.473067   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:07.523879   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:07.523922   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:07.569734   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:07.569793   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.109972   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:59:10.115439   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:59:10.117026   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:59:10.117051   64902 api_server.go:131] duration metric: took 4.019378057s to wait for apiserver health ...
	I0404 22:59:10.117059   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:10.117084   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:10.117138   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:10.161095   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.161113   64902 cri.go:89] found id: ""
	I0404 22:59:10.161120   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:10.161167   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.165630   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:10.165694   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:10.204636   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.204655   64902 cri.go:89] found id: ""
	I0404 22:59:10.204662   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:10.204711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.209645   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:10.209721   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:10.256830   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:10.256856   64902 cri.go:89] found id: ""
	I0404 22:59:10.256866   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:10.256917   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.261699   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:10.261763   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:10.304897   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.304914   64902 cri.go:89] found id: ""
	I0404 22:59:10.304922   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:10.304976   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.310884   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:10.310961   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:10.349724   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.349745   64902 cri.go:89] found id: ""
	I0404 22:59:10.349754   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:10.349811   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.354588   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:10.354643   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:10.399066   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.399087   64902 cri.go:89] found id: ""
	I0404 22:59:10.399113   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:10.399160   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.404698   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:10.404771   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:10.454142   64902 cri.go:89] found id: ""
	I0404 22:59:10.454173   64902 logs.go:276] 0 containers: []
	W0404 22:59:10.454183   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:10.454189   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:10.454347   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:10.503594   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.503620   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.503624   64902 cri.go:89] found id: ""
	I0404 22:59:10.503633   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:10.503696   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.510223   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.515081   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:10.515102   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.571927   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:10.571965   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.625355   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:10.625391   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.669033   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:10.669061   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.729035   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:10.729070   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.778108   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:10.778138   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.828328   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:10.828355   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:10.885732   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:10.885784   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:10.905718   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:10.905759   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:11.284079   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:11.284115   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:11.328982   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:11.329013   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:11.372384   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:11.372415   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:06.524620   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:09.025228   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:09.040219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:09.040306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:09.077397   65393 cri.go:89] found id: ""
	I0404 22:59:09.077428   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.077439   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:09.077447   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:09.077530   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:09.115279   65393 cri.go:89] found id: ""
	I0404 22:59:09.115309   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.115319   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:09.115326   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:09.115391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:09.156338   65393 cri.go:89] found id: ""
	I0404 22:59:09.156367   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.156375   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:09.156381   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:09.156444   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:09.199281   65393 cri.go:89] found id: ""
	I0404 22:59:09.199310   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.199319   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:09.199325   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:09.199377   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:09.239845   65393 cri.go:89] found id: ""
	I0404 22:59:09.239870   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.239878   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:09.239883   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:09.239944   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:09.285520   65393 cri.go:89] found id: ""
	I0404 22:59:09.285551   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.285562   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:09.285569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:09.285635   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:09.322005   65393 cri.go:89] found id: ""
	I0404 22:59:09.322033   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.322043   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:09.322050   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:09.322113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:09.357356   65393 cri.go:89] found id: ""
	I0404 22:59:09.357384   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.357394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:09.357404   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:09.357419   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:09.437353   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:09.437389   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:09.480066   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:09.480095   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:09.534394   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:09.534433   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:09.548926   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:09.548951   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:09.623970   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:07.363250   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.860037   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.806287   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:12.306720   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:11.521189   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:11.521222   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:14.076828   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:14.076859   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.076866   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.076872   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.076878   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.076882   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.076888   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.076895   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.076901   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.076911   64902 system_pods.go:74] duration metric: took 3.959845225s to wait for pod list to return data ...
	I0404 22:59:14.076920   64902 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:14.080018   64902 default_sa.go:45] found service account: "default"
	I0404 22:59:14.080052   64902 default_sa.go:55] duration metric: took 3.124198ms for default service account to be created ...
	I0404 22:59:14.080063   64902 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:14.085812   64902 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:14.085841   64902 system_pods.go:89] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.085847   64902 system_pods.go:89] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.085851   64902 system_pods.go:89] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.085855   64902 system_pods.go:89] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.085859   64902 system_pods.go:89] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.085863   64902 system_pods.go:89] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.085871   64902 system_pods.go:89] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.085875   64902 system_pods.go:89] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.085882   64902 system_pods.go:126] duration metric: took 5.81489ms to wait for k8s-apps to be running ...
	I0404 22:59:14.085889   64902 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:14.085933   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:14.106253   64902 system_svc.go:56] duration metric: took 20.352553ms WaitForService to wait for kubelet
	I0404 22:59:14.106295   64902 kubeadm.go:576] duration metric: took 4m23.188703249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:14.106319   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:14.110333   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:14.110359   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:14.110373   64902 node_conditions.go:105] duration metric: took 4.048469ms to run NodePressure ...
	I0404 22:59:14.110389   64902 start.go:240] waiting for startup goroutines ...
	I0404 22:59:14.110399   64902 start.go:245] waiting for cluster config update ...
	I0404 22:59:14.110412   64902 start.go:254] writing updated cluster config ...
	I0404 22:59:14.110736   64902 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:14.160959   64902 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 22:59:14.164129   64902 out.go:177] * Done! kubectl is now configured to use "embed-certs-143118" cluster and "default" namespace by default
	I0404 22:59:12.124508   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:12.139786   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:12.139862   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:12.179841   65393 cri.go:89] found id: ""
	I0404 22:59:12.179864   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.179872   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:12.179877   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:12.179934   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:12.217232   65393 cri.go:89] found id: ""
	I0404 22:59:12.217260   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.217270   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:12.217277   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:12.217333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:12.258875   65393 cri.go:89] found id: ""
	I0404 22:59:12.258905   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.258917   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:12.258927   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:12.258990   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:12.302455   65393 cri.go:89] found id: ""
	I0404 22:59:12.302493   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.302508   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:12.302516   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:12.302581   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:12.345264   65393 cri.go:89] found id: ""
	I0404 22:59:12.345298   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.345310   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:12.345322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:12.345386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:12.383782   65393 cri.go:89] found id: ""
	I0404 22:59:12.383805   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.383814   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:12.383820   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:12.383881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:12.421738   65393 cri.go:89] found id: ""
	I0404 22:59:12.421767   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.421777   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:12.421784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:12.421844   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:12.460348   65393 cri.go:89] found id: ""
	I0404 22:59:12.460379   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.460391   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:12.460407   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:12.460424   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:12.516043   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:12.516081   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:12.531557   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:12.531585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:12.603052   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:12.603080   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:12.603098   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:12.689033   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:12.689069   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.253621   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:15.268084   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:15.268162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:15.306888   65393 cri.go:89] found id: ""
	I0404 22:59:15.306913   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.306922   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:15.306929   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:15.306986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:15.345169   65393 cri.go:89] found id: ""
	I0404 22:59:15.345203   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.345214   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:15.345221   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:15.345279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:15.381835   65393 cri.go:89] found id: ""
	I0404 22:59:15.381863   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.381874   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:15.381881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:15.381941   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:15.418221   65393 cri.go:89] found id: ""
	I0404 22:59:15.418247   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.418254   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:15.418259   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:15.418302   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:15.456658   65393 cri.go:89] found id: ""
	I0404 22:59:15.456684   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.456696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:15.456703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:15.456761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:15.498325   65393 cri.go:89] found id: ""
	I0404 22:59:15.498349   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.498359   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:15.498367   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:15.498443   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:15.538694   65393 cri.go:89] found id: ""
	I0404 22:59:15.538723   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.538731   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:15.538738   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:15.538796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:15.575615   65393 cri.go:89] found id: ""
	I0404 22:59:15.575642   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.575650   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:15.575660   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:15.575672   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.616824   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:15.616851   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:15.670897   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:15.670945   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:15.688394   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:15.688429   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:15.764184   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:15.764207   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:15.764222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:11.860993   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:13.861520   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:16.361055   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:14.809060   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:17.308968   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:18.346181   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:18.361390   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:18.361465   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:18.410432   65393 cri.go:89] found id: ""
	I0404 22:59:18.410463   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.410474   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:18.410482   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:18.410547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:18.449280   65393 cri.go:89] found id: ""
	I0404 22:59:18.449309   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.449317   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:18.449322   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:18.449380   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:18.508387   65393 cri.go:89] found id: ""
	I0404 22:59:18.508411   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.508420   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:18.508425   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:18.508481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:18.555469   65393 cri.go:89] found id: ""
	I0404 22:59:18.555492   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.555501   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:18.555506   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:18.555551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:18.592206   65393 cri.go:89] found id: ""
	I0404 22:59:18.592231   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.592239   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:18.592245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:18.592294   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:18.628850   65393 cri.go:89] found id: ""
	I0404 22:59:18.628890   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.628900   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:18.628908   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:18.628968   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:18.667507   65393 cri.go:89] found id: ""
	I0404 22:59:18.667543   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.667556   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:18.667564   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:18.667630   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:18.706367   65393 cri.go:89] found id: ""
	I0404 22:59:18.706392   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.706410   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:18.706422   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:18.706438   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:18.761069   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:18.761108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:18.777164   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:18.777204   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:18.861741   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:18.861769   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:18.861782   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:18.948064   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:18.948108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:18.361239   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:20.362324   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:19.805576   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.805654   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.497977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:21.511810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:21.511901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:21.548151   65393 cri.go:89] found id: ""
	I0404 22:59:21.548177   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.548188   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:21.548196   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:21.548241   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:21.587395   65393 cri.go:89] found id: ""
	I0404 22:59:21.587420   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.587436   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:21.587441   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:21.587507   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:21.624280   65393 cri.go:89] found id: ""
	I0404 22:59:21.624312   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.624322   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:21.624330   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:21.624391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:21.664557   65393 cri.go:89] found id: ""
	I0404 22:59:21.664583   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.664593   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:21.664600   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:21.664666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:21.705570   65393 cri.go:89] found id: ""
	I0404 22:59:21.705601   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.705614   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:21.705622   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:21.705683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:21.744722   65393 cri.go:89] found id: ""
	I0404 22:59:21.744755   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.744764   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:21.744770   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:21.744831   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:21.784997   65393 cri.go:89] found id: ""
	I0404 22:59:21.785036   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.785047   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:21.785054   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:21.785117   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:21.825404   65393 cri.go:89] found id: ""
	I0404 22:59:21.825428   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.825435   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:21.825443   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:21.825467   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:21.880421   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:21.880470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:21.898337   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:21.898367   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:21.987201   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:21.987233   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:21.987249   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.070135   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:22.070176   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:24.613694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:24.627661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:24.627823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:24.667552   65393 cri.go:89] found id: ""
	I0404 22:59:24.667580   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.667594   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:24.667601   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:24.667663   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:24.705866   65393 cri.go:89] found id: ""
	I0404 22:59:24.705888   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.705897   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:24.705905   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:24.705975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:24.746920   65393 cri.go:89] found id: ""
	I0404 22:59:24.746948   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.746959   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:24.746967   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:24.747021   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:24.792236   65393 cri.go:89] found id: ""
	I0404 22:59:24.792259   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.792270   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:24.792281   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:24.792340   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:24.832066   65393 cri.go:89] found id: ""
	I0404 22:59:24.832096   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.832107   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:24.832133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:24.832207   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:24.873568   65393 cri.go:89] found id: ""
	I0404 22:59:24.873594   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.873605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:24.873612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:24.873678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:24.922719   65393 cri.go:89] found id: ""
	I0404 22:59:24.922743   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.922750   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:24.922756   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:24.922801   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:24.981180   65393 cri.go:89] found id: ""
	I0404 22:59:24.981229   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.981243   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:24.981255   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:24.981272   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:25.039695   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:25.039735   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:25.097992   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:25.098037   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:25.113941   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:25.113970   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:25.185615   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:25.185643   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:25.185659   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.362720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.363009   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.305260   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:26.805336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:27.772867   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:27.787478   65393 kubeadm.go:591] duration metric: took 4m3.182360219s to restartPrimaryControlPlane
	W0404 22:59:27.787560   65393 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:27.787594   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 22:59:30.083285   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.29566411s)
	I0404 22:59:30.083364   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:30.099547   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:59:30.110792   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:59:30.123094   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:59:30.123110   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:59:30.123152   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:59:30.133535   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:59:30.133596   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:59:30.144194   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:59:30.154411   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:59:30.154476   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:59:30.164648   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.174227   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:59:30.174292   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.184396   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:59:30.194311   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:59:30.194370   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:59:30.204463   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 22:59:30.284881   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 22:59:30.285065   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 22:59:30.439256   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 22:59:30.439379   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 22:59:30.439558   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 22:59:30.640320   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 22:59:30.642667   65393 out.go:204]   - Generating certificates and keys ...
	I0404 22:59:30.642787   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 22:59:30.642883   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 22:59:30.643858   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 22:59:30.643964   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 22:59:30.644068   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 22:59:30.644183   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 22:59:30.644914   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 22:59:30.646151   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 22:59:30.646413   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 22:59:30.646975   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 22:59:30.647036   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 22:59:30.647163   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 22:59:30.818578   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 22:59:31.002928   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 22:59:31.300200   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 22:59:26.861545   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:29.361333   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.508251   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 22:59:31.525515   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 22:59:31.527679   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 22:59:31.527773   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 22:59:31.680829   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 22:59:28.806960   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.305235   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:33.305735   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.682802   65393 out.go:204]   - Booting up control plane ...
	I0404 22:59:31.682939   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 22:59:31.684100   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 22:59:31.685552   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 22:59:31.686931   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 22:59:31.689241   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 22:59:31.863885   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:34.361582   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:36.362733   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:35.810851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:37.305033   65047 pod_ready.go:81] duration metric: took 4m0.006524977s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:37.305064   65047 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:37.305072   65047 pod_ready.go:38] duration metric: took 4m5.047389638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:37.305095   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:37.305121   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:37.305167   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:37.365002   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:37.365022   65047 cri.go:89] found id: ""
	I0404 22:59:37.365029   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:37.365079   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.370431   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:37.370490   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:37.411461   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:37.411489   65047 cri.go:89] found id: ""
	I0404 22:59:37.411498   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:37.411546   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.416214   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:37.416280   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:37.467470   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:37.467500   65047 cri.go:89] found id: ""
	I0404 22:59:37.467510   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:37.467565   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.472332   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:37.472394   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:37.511792   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:37.511815   65047 cri.go:89] found id: ""
	I0404 22:59:37.511821   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:37.511870   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.516458   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:37.516514   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:37.556843   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:37.556869   65047 cri.go:89] found id: ""
	I0404 22:59:37.556880   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:37.556941   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.561556   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:37.561617   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:37.601741   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.601764   65047 cri.go:89] found id: ""
	I0404 22:59:37.601775   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:37.601831   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.606376   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:37.606449   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:37.647116   65047 cri.go:89] found id: ""
	I0404 22:59:37.647139   65047 logs.go:276] 0 containers: []
	W0404 22:59:37.647146   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:37.647151   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:37.647211   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:37.694580   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.694603   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:37.694607   65047 cri.go:89] found id: ""
	I0404 22:59:37.694614   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:37.694662   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.699109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.703776   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:37.703802   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.758969   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:37.759001   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.808316   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:37.808339   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:37.873353   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:37.873388   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:37.891256   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:37.891290   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:38.047292   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:38.047323   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:38.104845   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:38.104881   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:38.180173   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:38.180209   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:38.225152   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:38.225185   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:38.275621   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:38.275647   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:38.861119   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:40.862057   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:38.791198   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:38.791239   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:38.838995   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:38.839032   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:38.889944   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:38.889980   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.437490   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:41.454599   65047 api_server.go:72] duration metric: took 4m14.969220816s to wait for apiserver process to appear ...
	I0404 22:59:41.454641   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:41.454676   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:41.454729   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:41.496719   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.496740   65047 cri.go:89] found id: ""
	I0404 22:59:41.496747   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:41.496790   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.501869   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:41.501949   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:41.548019   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:41.548038   65047 cri.go:89] found id: ""
	I0404 22:59:41.548047   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:41.548105   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.552683   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:41.552743   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:41.594018   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.594046   65047 cri.go:89] found id: ""
	I0404 22:59:41.594054   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:41.594109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.598612   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:41.598670   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:41.640618   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:41.640644   65047 cri.go:89] found id: ""
	I0404 22:59:41.640654   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:41.640711   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.645201   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:41.645271   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:41.691378   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.691401   65047 cri.go:89] found id: ""
	I0404 22:59:41.691410   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:41.691465   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.696359   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:41.696421   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:41.738169   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:41.738199   65047 cri.go:89] found id: ""
	I0404 22:59:41.738208   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:41.738261   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.742769   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:41.742844   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:41.782136   65047 cri.go:89] found id: ""
	I0404 22:59:41.782163   65047 logs.go:276] 0 containers: []
	W0404 22:59:41.782175   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:41.782181   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:41.782244   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:41.825698   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:41.825717   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:41.825721   65047 cri.go:89] found id: ""
	I0404 22:59:41.825728   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:41.825773   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.834332   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.840251   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:41.840280   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.914817   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:41.914856   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.956375   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:41.956401   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.999930   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:41.999960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:42.067118   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:42.067148   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:42.104788   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:42.104818   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:42.542407   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:42.542444   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:42.596923   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:42.596957   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:42.613545   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:42.613571   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:42.732728   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:42.732756   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:42.801975   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:42.802010   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:42.844728   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:42.844757   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:42.897576   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:42.897602   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:42.863167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.360824   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.448107   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:59:45.453565   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:59:45.454921   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:59:45.454945   65047 api_server.go:131] duration metric: took 4.000295856s to wait for apiserver health ...
	I0404 22:59:45.454955   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:45.454985   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:45.455041   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:45.499810   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:45.499841   65047 cri.go:89] found id: ""
	I0404 22:59:45.499849   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:45.499903   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.506696   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:45.506797   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:45.549063   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:45.549087   65047 cri.go:89] found id: ""
	I0404 22:59:45.549094   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:45.549135   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.553601   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:45.553663   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:45.605961   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:45.605989   65047 cri.go:89] found id: ""
	I0404 22:59:45.605999   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:45.606057   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.610424   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:45.610496   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:45.651488   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:45.651520   65047 cri.go:89] found id: ""
	I0404 22:59:45.651530   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:45.651589   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.655935   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:45.656005   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:45.694207   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:45.694225   65047 cri.go:89] found id: ""
	I0404 22:59:45.694235   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:45.694290   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.698466   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:45.698520   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:45.736294   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:45.736318   65047 cri.go:89] found id: ""
	I0404 22:59:45.736326   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:45.736389   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.741126   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:45.741200   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:45.783225   65047 cri.go:89] found id: ""
	I0404 22:59:45.783254   65047 logs.go:276] 0 containers: []
	W0404 22:59:45.783265   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:45.783271   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:45.783332   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:45.823086   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:45.823110   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:45.823116   65047 cri.go:89] found id: ""
	I0404 22:59:45.823126   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:45.823182   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.827497   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.832389   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:45.832416   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:45.892925   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:45.892967   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:46.018980   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:46.019011   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:46.083405   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:46.083438   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:46.144135   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:46.144169   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:46.190770   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:46.190803   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:46.257768   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:46.257801   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:46.275102   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:46.275127   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:46.312291   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:46.312318   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:46.351086   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:46.351112   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:46.393478   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:46.393506   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:46.438745   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:46.438778   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:46.822672   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:46.822706   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:49.386090   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:49.386122   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.386127   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.386133   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.386137   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.386140   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.386143   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.386150   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.386156   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.386165   65047 system_pods.go:74] duration metric: took 3.931202298s to wait for pod list to return data ...
	I0404 22:59:49.386176   65047 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:49.389035   65047 default_sa.go:45] found service account: "default"
	I0404 22:59:49.389067   65047 default_sa.go:55] duration metric: took 2.877732ms for default service account to be created ...
	I0404 22:59:49.389079   65047 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:49.394829   65047 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:49.394859   65047 system_pods.go:89] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.394867   65047 system_pods.go:89] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.394875   65047 system_pods.go:89] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.394882   65047 system_pods.go:89] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.394888   65047 system_pods.go:89] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.394896   65047 system_pods.go:89] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.394904   65047 system_pods.go:89] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.394912   65047 system_pods.go:89] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.394922   65047 system_pods.go:126] duration metric: took 5.837975ms to wait for k8s-apps to be running ...
	I0404 22:59:49.394931   65047 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:49.394980   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:49.414076   65047 system_svc.go:56] duration metric: took 19.132995ms WaitForService to wait for kubelet
	I0404 22:59:49.414111   65047 kubeadm.go:576] duration metric: took 4m22.928735837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:49.414133   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:49.417160   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:49.417189   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:49.417204   65047 node_conditions.go:105] duration metric: took 3.065597ms to run NodePressure ...
	I0404 22:59:49.417218   65047 start.go:240] waiting for startup goroutines ...
	I0404 22:59:49.417228   65047 start.go:245] waiting for cluster config update ...
	I0404 22:59:49.417240   65047 start.go:254] writing updated cluster config ...
	I0404 22:59:49.417615   65047 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:49.470214   65047 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0404 22:59:49.472584   65047 out.go:177] * Done! kubectl is now configured to use "no-preload-024416" cluster and "default" namespace by default
	I0404 22:59:47.361662   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:49.861684   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:52.360447   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:54.361574   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:56.361652   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:57.854723   64791 pod_ready.go:81] duration metric: took 4m0.000936307s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:57.854770   64791 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" (will not retry!)
	I0404 22:59:57.854788   64791 pod_ready.go:38] duration metric: took 4m7.483622498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:57.854822   64791 kubeadm.go:591] duration metric: took 4m16.210645162s to restartPrimaryControlPlane
	W0404 22:59:57.854889   64791 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:57.854920   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:00:11.689226   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:00:11.689589   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:11.689862   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:16.690194   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:16.690413   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:30.113988   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.259044217s)
	I0404 23:00:30.114079   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:30.130372   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 23:00:30.141114   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:00:30.151572   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:00:30.151606   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 23:00:30.151649   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 23:00:30.162006   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:00:30.162058   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:00:30.172386   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 23:00:30.182463   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:00:30.182526   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:00:30.192462   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.202932   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:00:30.203003   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.212623   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 23:00:30.222016   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:00:30.222079   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 23:00:30.231912   64791 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:00:30.292277   64791 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0404 23:00:30.292356   64791 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:00:30.453305   64791 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:00:30.453442   64791 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:00:30.453626   64791 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:00:30.680949   64791 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:00:26.690539   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:26.690756   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:30.683066   64791 out.go:204]   - Generating certificates and keys ...
	I0404 23:00:30.683154   64791 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:00:30.683236   64791 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:00:30.683345   64791 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:00:30.683428   64791 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:00:30.683546   64791 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:00:30.683635   64791 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:00:30.683733   64791 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:00:30.683815   64791 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:00:30.684097   64791 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:00:30.684545   64791 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:00:30.684950   64791 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:00:30.685055   64791 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:00:30.969937   64791 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:00:31.196768   64791 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0404 23:00:31.562187   64791 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:00:32.005580   64791 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:00:32.066695   64791 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:00:32.067434   64791 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:00:32.070078   64791 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:00:32.072059   64791 out.go:204]   - Booting up control plane ...
	I0404 23:00:32.072188   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:00:32.072299   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:00:32.072803   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:00:32.094384   64791 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:00:32.095424   64791 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:00:32.095565   64791 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:00:32.235639   64791 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:00:37.740974   64791 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.503079 seconds
	I0404 23:00:37.757121   64791 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0404 23:00:37.771037   64791 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0404 23:00:38.311403   64791 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0404 23:00:38.311608   64791 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-952083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0404 23:00:38.826726   64791 kubeadm.go:309] [bootstrap-token] Using token: fa5m5r.x1an64r8vrgp89m4
	I0404 23:00:38.828302   64791 out.go:204]   - Configuring RBAC rules ...
	I0404 23:00:38.828445   64791 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0404 23:00:38.835805   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0404 23:00:38.846727   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0404 23:00:38.850494   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0404 23:00:38.854653   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0404 23:00:38.859330   64791 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0404 23:00:38.882227   64791 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0404 23:00:39.129283   64791 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0404 23:00:39.263855   64791 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0404 23:00:39.265385   64791 kubeadm.go:309] 
	I0404 23:00:39.265534   64791 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0404 23:00:39.265556   64791 kubeadm.go:309] 
	I0404 23:00:39.265675   64791 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0404 23:00:39.265697   64791 kubeadm.go:309] 
	I0404 23:00:39.265728   64791 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0404 23:00:39.265816   64791 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0404 23:00:39.265873   64791 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0404 23:00:39.265883   64791 kubeadm.go:309] 
	I0404 23:00:39.265948   64791 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0404 23:00:39.265957   64791 kubeadm.go:309] 
	I0404 23:00:39.266009   64791 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0404 23:00:39.266018   64791 kubeadm.go:309] 
	I0404 23:00:39.266072   64791 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0404 23:00:39.266189   64791 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0404 23:00:39.266303   64791 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0404 23:00:39.266313   64791 kubeadm.go:309] 
	I0404 23:00:39.266445   64791 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0404 23:00:39.266542   64791 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0404 23:00:39.266555   64791 kubeadm.go:309] 
	I0404 23:00:39.266661   64791 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.266828   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c \
	I0404 23:00:39.266878   64791 kubeadm.go:309] 	--control-plane 
	I0404 23:00:39.266887   64791 kubeadm.go:309] 
	I0404 23:00:39.267081   64791 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0404 23:00:39.267099   64791 kubeadm.go:309] 
	I0404 23:00:39.267206   64791 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.267353   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c 
	I0404 23:00:39.278406   64791 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:00:39.278440   64791 cni.go:84] Creating CNI manager for ""
	I0404 23:00:39.278450   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 23:00:39.280130   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 23:00:39.281491   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 23:00:39.300708   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
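The bridge CNI step above only copies a generated conflist onto the node; if needed, the file can be inspected in place (a hedged example, since the 496-byte contents are not reproduced in this log):

	minikube -p default-k8s-diff-port-952083 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist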
	I0404 23:00:39.411609   64791 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 23:00:39.411700   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:39.411741   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-952083 minikube.k8s.io/updated_at=2024_04_04T23_00_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=default-k8s-diff-port-952083 minikube.k8s.io/primary=true
	I0404 23:00:39.491213   64791 ops.go:34] apiserver oom_adj: -16
	I0404 23:00:39.639999   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.140239   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.640887   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.140139   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.641048   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.140111   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.640439   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.140701   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.640565   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.140884   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.640405   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.140665   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.640037   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.140251   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.640442   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.691433   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:46.691736   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:47.140207   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:47.640279   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.141046   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.640259   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.140525   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.640869   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.140946   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.640215   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.140706   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.640589   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.140926   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.640774   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.755392   64791 kubeadm.go:1107] duration metric: took 13.343764127s to wait for elevateKubeSystemPrivileges
	W0404 23:00:52.755430   64791 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0404 23:00:52.755438   64791 kubeadm.go:393] duration metric: took 5m11.165500768s to StartCluster
	I0404 23:00:52.755452   64791 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.755542   64791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 23:00:52.757921   64791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.758240   64791 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 23:00:52.760258   64791 out.go:177] * Verifying Kubernetes components...
	I0404 23:00:52.758360   64791 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 23:00:52.758468   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 23:00:52.761972   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 23:00:52.761988   64791 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.761992   64791 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762023   64791 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-952083"
	I0404 23:00:52.762033   64791 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762045   64791 addons.go:243] addon metrics-server should already be in state true
	I0404 23:00:52.761969   64791 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762082   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762089   64791 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762098   64791 addons.go:243] addon storage-provisioner should already be in state true
	I0404 23:00:52.762120   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762369   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762402   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762483   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762595   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.780081   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33543
	I0404 23:00:52.780630   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.782784   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0404 23:00:52.782816   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0404 23:00:52.783239   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783487   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783765   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783788   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.783934   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783952   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784138   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784378   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784386   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.784518   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.784536   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784883   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.785321   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785347   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.785391   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785423   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.788871   64791 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.788894   64791 addons.go:243] addon default-storageclass should already be in state true
	I0404 23:00:52.788931   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.789296   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.789333   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.804398   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33107
	I0404 23:00:52.804904   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.805654   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.805676   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.806123   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.806391   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.806825   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32861
	I0404 23:00:52.807228   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.807701   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.807729   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.808198   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.808222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.808414   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.810322   64791 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 23:00:52.809224   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0404 23:00:52.809886   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.811775   64791 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:52.811788   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 23:00:52.811806   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.813354   64791 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 23:00:52.812191   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.814489   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.814754   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 23:00:52.814765   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 23:00:52.814784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.815034   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.815192   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.815469   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.815641   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.815811   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.815997   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.816011   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.816227   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.816944   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.817981   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.818023   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.818260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818295   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.818316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818487   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.818752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.819016   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.819223   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.835713   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0404 23:00:52.836456   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.837140   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.837168   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.837987   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.838475   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.840280   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.840539   64791 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:52.840558   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 23:00:52.840576   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.843852   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.844244   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844418   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.844593   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.844757   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.844899   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.960152   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 23:00:52.982987   64791 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992420   64791 node_ready.go:49] node "default-k8s-diff-port-952083" has status "Ready":"True"
	I0404 23:00:52.992460   64791 node_ready.go:38] duration metric: took 9.431627ms for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992472   64791 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 23:00:52.998485   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:53.098746   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 23:00:53.098775   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 23:00:53.122387   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 23:00:53.122418   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 23:00:53.123566   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:53.143424   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:53.165026   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 23:00:53.165052   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 23:00:53.225963   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 23:00:53.720949   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720969   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720980   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.720988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721399   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721407   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721413   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721415   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721423   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721425   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721763   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721768   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721774   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721781   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.754309   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.754337   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.754642   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.754661   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.174695   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.174726   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175070   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175111   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175133   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175146   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.175154   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175443   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175446   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175454   64791 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-952083"
	I0404 23:00:54.177650   64791 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0404 23:00:54.179204   64791 addons.go:505] duration metric: took 1.420843749s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
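Once the addon step settles as above, the three enabled addons can be double-checked from the client side (illustrative commands; the resource names follow the kube-system pods listed later in this log):

	kubectl -n kube-system get deploy metrics-server
	kubectl -n kube-system get pod storage-provisioner
	kubectl get storageclass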
	I0404 23:00:55.012235   64791 pod_ready.go:102] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"False"
	I0404 23:00:56.006950   64791 pod_ready.go:92] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.006974   64791 pod_ready.go:81] duration metric: took 3.00846182s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.006983   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.012991   64791 pod_ready.go:92] pod "coredns-76f75df574-vnzlh" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.013015   64791 pod_ready.go:81] duration metric: took 6.025362ms for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.013028   64791 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.017976   64791 pod_ready.go:92] pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.017999   64791 pod_ready.go:81] duration metric: took 4.963826ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.018008   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022797   64791 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.022816   64791 pod_ready.go:81] duration metric: took 4.802173ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022825   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.027987   64791 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.028011   64791 pod_ready.go:81] duration metric: took 5.178244ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.028024   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402956   64791 pod_ready.go:92] pod "kube-proxy-lbw9b" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.402978   64791 pod_ready.go:81] duration metric: took 374.945741ms for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402998   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806688   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.806723   64791 pod_ready.go:81] duration metric: took 403.715948ms for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806734   64791 pod_ready.go:38] duration metric: took 3.814250651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 23:00:56.806748   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 23:00:56.806804   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 23:00:56.823558   64791 api_server.go:72] duration metric: took 4.065275824s to wait for apiserver process to appear ...
	I0404 23:00:56.823582   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 23:00:56.823602   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 23:00:56.828204   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 23:00:56.829616   64791 api_server.go:141] control plane version: v1.29.3
	I0404 23:00:56.829647   64791 api_server.go:131] duration metric: took 6.059596ms to wait for apiserver health ...
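The healthz probe performed here can be reproduced by hand against the same endpoint shown in the log (a sketch; -k skips certificate verification):

	curl -k https://192.168.72.148:8444/healthz   # expected response: ok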
	I0404 23:00:56.829654   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 23:00:57.009301   64791 system_pods.go:59] 9 kube-system pods found
	I0404 23:00:57.009336   64791 system_pods.go:61] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.009343   64791 system_pods.go:61] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.009349   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.009354   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.009359   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.009364   64791 system_pods.go:61] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.009368   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.009377   64791 system_pods.go:61] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.009382   64791 system_pods.go:61] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.009394   64791 system_pods.go:74] duration metric: took 179.733932ms to wait for pod list to return data ...
	I0404 23:00:57.009404   64791 default_sa.go:34] waiting for default service account to be created ...
	I0404 23:00:57.203053   64791 default_sa.go:45] found service account: "default"
	I0404 23:00:57.203080   64791 default_sa.go:55] duration metric: took 193.668691ms for default service account to be created ...
	I0404 23:00:57.203092   64791 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 23:00:57.407952   64791 system_pods.go:86] 9 kube-system pods found
	I0404 23:00:57.407986   64791 system_pods.go:89] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.407992   64791 system_pods.go:89] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.407996   64791 system_pods.go:89] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.408001   64791 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.408005   64791 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.408009   64791 system_pods.go:89] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.408013   64791 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.408021   64791 system_pods.go:89] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.408025   64791 system_pods.go:89] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.408033   64791 system_pods.go:126] duration metric: took 204.93565ms to wait for k8s-apps to be running ...
	I0404 23:00:57.408044   64791 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 23:00:57.408086   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:57.426022   64791 system_svc.go:56] duration metric: took 17.970809ms WaitForService to wait for kubelet
	I0404 23:00:57.426056   64791 kubeadm.go:576] duration metric: took 4.66777886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 23:00:57.426088   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 23:00:57.603468   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 23:00:57.603495   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 23:00:57.603508   64791 node_conditions.go:105] duration metric: took 177.414876ms to run NodePressure ...
	I0404 23:00:57.603522   64791 start.go:240] waiting for startup goroutines ...
	I0404 23:00:57.603532   64791 start.go:245] waiting for cluster config update ...
	I0404 23:00:57.603544   64791 start.go:254] writing updated cluster config ...
	I0404 23:00:57.603820   64791 ssh_runner.go:195] Run: rm -f paused
	I0404 23:00:57.652962   64791 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 23:00:57.655346   64791 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-952083" cluster and "default" namespace by default
	I0404 23:01:26.692667   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:01:26.693040   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:01:26.693065   65393 kubeadm.go:309] 
	I0404 23:01:26.693121   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:01:26.693176   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:01:26.693188   65393 kubeadm.go:309] 
	I0404 23:01:26.693230   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:01:26.693296   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:01:26.693448   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:01:26.693460   65393 kubeadm.go:309] 
	I0404 23:01:26.693615   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:01:26.693668   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:01:26.693715   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:01:26.693724   65393 kubeadm.go:309] 
	I0404 23:01:26.693859   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:01:26.693972   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:01:26.693981   65393 kubeadm.go:309] 
	I0404 23:01:26.694101   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:01:26.694189   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:01:26.694308   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:01:26.694419   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:01:26.694439   65393 kubeadm.go:309] 
	I0404 23:01:26.695392   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:01:26.695534   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:01:26.695645   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0404 23:01:26.695786   65393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0404 23:01:26.695852   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:01:27.175340   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:01:27.191436   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:01:27.202985   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:01:27.203023   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 23:01:27.203095   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 23:01:27.214266   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:01:27.214326   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:01:27.225823   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 23:01:27.236219   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:01:27.236291   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:01:27.247062   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.257772   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:01:27.257838   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.270595   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 23:01:27.283809   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:01:27.283884   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 23:01:27.294917   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:01:27.370268   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 23:01:27.370431   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:01:27.531171   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:01:27.531309   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:01:27.531502   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:01:27.741128   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:01:27.743555   65393 out.go:204]   - Generating certificates and keys ...
	I0404 23:01:27.743674   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:01:27.743778   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:01:27.743900   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:01:27.744020   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:01:27.744144   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:01:27.744231   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:01:27.744396   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:01:27.744532   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:01:27.744762   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:01:27.745172   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:01:27.745405   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:01:27.745902   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:01:27.811633   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:01:27.874609   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:01:28.009290   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:01:28.171654   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:01:28.193647   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:01:28.194533   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:01:28.194615   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:01:28.345640   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:01:28.348028   65393 out.go:204]   - Booting up control plane ...
	I0404 23:01:28.348233   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:01:28.352245   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:01:28.354730   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:01:28.354860   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:01:28.366484   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:02:08.368069   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:02:08.368190   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:08.368485   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:13.368934   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:13.369212   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:23.369698   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:23.369947   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:43.370821   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:43.371097   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372612   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:03:23.372925   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372952   65393 kubeadm.go:309] 
	I0404 23:03:23.373009   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:03:23.373060   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:03:23.373083   65393 kubeadm.go:309] 
	I0404 23:03:23.373139   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:03:23.373204   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:03:23.373444   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:03:23.373460   65393 kubeadm.go:309] 
	I0404 23:03:23.373609   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:03:23.373664   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:03:23.373709   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:03:23.373720   65393 kubeadm.go:309] 
	I0404 23:03:23.373881   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:03:23.373993   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:03:23.374004   65393 kubeadm.go:309] 
	I0404 23:03:23.374160   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:03:23.374293   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:03:23.374425   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:03:23.374542   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:03:23.374565   65393 kubeadm.go:309] 
	I0404 23:03:23.375946   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:03:23.376063   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:03:23.376228   65393 kubeadm.go:393] duration metric: took 7m58.828175379s to StartCluster
	I0404 23:03:23.376287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 23:03:23.376229   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0404 23:03:23.376350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 23:03:23.427597   65393 cri.go:89] found id: ""
	I0404 23:03:23.427625   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.427637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 23:03:23.427644   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 23:03:23.427709   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 23:03:23.466127   65393 cri.go:89] found id: ""
	I0404 23:03:23.466158   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.466168   65393 logs.go:278] No container was found matching "etcd"
	I0404 23:03:23.466176   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 23:03:23.466240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 23:03:23.509244   65393 cri.go:89] found id: ""
	I0404 23:03:23.509287   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.509295   65393 logs.go:278] No container was found matching "coredns"
	I0404 23:03:23.509301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 23:03:23.509347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 23:03:23.553691   65393 cri.go:89] found id: ""
	I0404 23:03:23.553722   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.553732   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 23:03:23.553740   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 23:03:23.553807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 23:03:23.601941   65393 cri.go:89] found id: ""
	I0404 23:03:23.601965   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.601973   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 23:03:23.601979   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 23:03:23.602037   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 23:03:23.639767   65393 cri.go:89] found id: ""
	I0404 23:03:23.639802   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.639811   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 23:03:23.639817   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 23:03:23.639875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 23:03:23.680136   65393 cri.go:89] found id: ""
	I0404 23:03:23.680168   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.680179   65393 logs.go:278] No container was found matching "kindnet"
	I0404 23:03:23.680187   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 23:03:23.680246   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 23:03:23.719745   65393 cri.go:89] found id: ""
	I0404 23:03:23.719767   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.719774   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 23:03:23.719784   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 23:03:23.719797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 23:03:23.776065   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 23:03:23.776105   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 23:03:23.791442   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 23:03:23.791469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 23:03:23.884793   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 23:03:23.884820   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 23:03:23.884836   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 23:03:24.001882   65393 logs.go:123] Gathering logs for container status ...
	I0404 23:03:24.001926   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0404 23:03:24.056020   65393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0404 23:03:24.056075   65393 out.go:239] * 
	W0404 23:03:24.056157   65393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.056191   65393 out.go:239] * 
	W0404 23:03:24.057119   65393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 23:03:24.060566   65393 out.go:177] 
	W0404 23:03:24.061904   65393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.061981   65393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0404 23:03:24.062009   65393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0404 23:03:24.063812   65393 out.go:177] 
	
	
	==> CRI-O <==
	Apr 04 23:03:25 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:25.932824374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712271805932790324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=749fe3c8-1f6f-4d90-af81-042611d28690 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:03:25 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:25.933642721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7f80d58-6a78-46d6-ac1e-2d1fc1d658d5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:03:25 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:25.933745742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7f80d58-6a78-46d6-ac1e-2d1fc1d658d5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:03:25 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:25.933801707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f7f80d58-6a78-46d6-ac1e-2d1fc1d658d5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:03:25 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:25.973346976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=395c3950-60fc-4468-baa1-7b97855adec4 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:03:25 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:25.973459648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=395c3950-60fc-4468-baa1-7b97855adec4 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:03:25 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:25.975254230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6fe78dc-f200-4943-8d87-9079915cde9a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:03:25 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:25.975960658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712271805975929183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6fe78dc-f200-4943-8d87-9079915cde9a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:03:25 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:25.976872395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17be2378-6810-4550-aaf4-1f8b2c506ab4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:03:25 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:25.976940931Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17be2378-6810-4550-aaf4-1f8b2c506ab4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:03:25 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:25.976985676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=17be2378-6810-4550-aaf4-1f8b2c506ab4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.018588217Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c4f7dc8-466d-46c7-b290-aab9e7546172 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.018694476Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c4f7dc8-466d-46c7-b290-aab9e7546172 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.020420955Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8dcf83b9-0918-4a5e-b6cf-39b0a3f5757f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.021029620Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712271806020993823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8dcf83b9-0918-4a5e-b6cf-39b0a3f5757f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.021934054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=049f4919-da02-400d-a9c6-06eab8d6a349 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.022026332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=049f4919-da02-400d-a9c6-06eab8d6a349 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.022091125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=049f4919-da02-400d-a9c6-06eab8d6a349 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.061460229Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66e32c52-ae8c-41e7-aed8-e5e43ffd42e8 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.061669637Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66e32c52-ae8c-41e7-aed8-e5e43ffd42e8 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.063297744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b60b4ad9-3ace-4ec5-8fc3-9a7e4277e1df name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.063753198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712271806063728774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b60b4ad9-3ace-4ec5-8fc3-9a7e4277e1df name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.064560975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2ade05b-b0bf-47c2-93a8-c3dd8d91da42 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.064619454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2ade05b-b0bf-47c2-93a8-c3dd8d91da42 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:03:26 old-k8s-version-343162 crio[651]: time="2024-04-04 23:03:26.064657223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a2ade05b-b0bf-47c2-93a8-c3dd8d91da42 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 4 22:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056193] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041693] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr 4 22:55] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.993320] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.666052] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.724891] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.065551] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096758] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.200608] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.163985] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.312462] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.410618] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.075387] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.725033] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[ +11.687086] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 4 22:59] systemd-fstab-generator[4953]: Ignoring "noauto" option for root device
	[Apr 4 23:01] systemd-fstab-generator[5232]: Ignoring "noauto" option for root device
	[  +0.067974] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:03:26 up 8 min,  0 users,  load average: 0.11, 0.09, 0.03
	Linux old-k8s-version-343162 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0006316f0)
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a6bef0, 0x4f0ac20, 0xc000a45900, 0x1, 0xc00009e0c0)
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000b92620, 0xc00009e0c0)
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000befcf0, 0xc00088daa0)
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5411]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 04 23:03:23 old-k8s-version-343162 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 04 23:03:23 old-k8s-version-343162 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 04 23:03:23 old-k8s-version-343162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 04 23:03:23 old-k8s-version-343162 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 04 23:03:23 old-k8s-version-343162 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5470]: I0404 23:03:23.978362    5470 server.go:416] Version: v1.20.0
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5470]: I0404 23:03:23.978745    5470 server.go:837] Client rotation is on, will bootstrap in background
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5470]: I0404 23:03:23.981011    5470 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5470]: I0404 23:03:23.982365    5470 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 04 23:03:23 old-k8s-version-343162 kubelet[5470]: W0404 23:03:23.982389    5470 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-343162 -n old-k8s-version-343162
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 2 (258.596387ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-343162" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (751.29s)
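
The suggestion minikube itself prints above (try passing --extra-config=kubelet.cgroup-driver=systemd) lines up with the last kubelet line in this dump ("Cannot detect current cgroup on cgroup v2"). A minimal way to try that suggestion by hand, reusing the profile name and flags that appear elsewhere in this report (a sketch of the proposed workaround, not a verified fix for this failure):

	minikube start -p old-k8s-version-343162 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# then re-check the kubelet on the node, as the kubeadm output above recommends:
	minikube ssh -p old-k8s-version-343162 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-343162 -- sudo journalctl -xeu kubelet --no-pager | tail -n 100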

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0404 22:59:36.908010   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-143118 -n embed-certs-143118
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-04 23:08:14.746169757 +0000 UTC m=+5950.609155467
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
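
The harness output above only records the 9m0s timeout; the equivalent manual query against the same context and label selector would show whether the dashboard pods were ever scheduled (a sketch, assuming the cluster is still reachable from the test host):

	kubectl --context embed-certs-143118 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context embed-certs-143118 -n kubernetes-dashboard get events --sort-by=.lastTimestamp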
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-143118 -n embed-certs-143118
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-143118 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-143118 logs -n 25: (2.147280506s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-063570 sudo cat                              | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo find                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo crio                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-063570                                       | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-443615 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | disable-driver-mounts-443615                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:46 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-952083  | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-143118            | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-024416             | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-343162        | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-952083       | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 23:00 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-143118                 | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-024416                  | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-343162             | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 22:50:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 22:50:56.470398   65393 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:50:56.470518   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470527   65393 out.go:304] Setting ErrFile to fd 2...
	I0404 22:50:56.470531   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470702   65393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:50:56.471207   65393 out.go:298] Setting JSON to false
	I0404 22:50:56.472107   65393 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5602,"bootTime":1712265455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:50:56.472206   65393 start.go:139] virtualization: kvm guest
	I0404 22:50:56.474636   65393 out.go:177] * [old-k8s-version-343162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:50:56.477086   65393 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:50:56.477116   65393 notify.go:220] Checking for updates...
	I0404 22:50:56.479875   65393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:50:56.481381   65393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:50:56.482598   65393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:50:56.483896   65393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:50:56.485276   65393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:50:56.487030   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:50:56.487414   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.487471   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.502537   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44449
	I0404 22:50:56.502977   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.503568   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.503592   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.503915   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.504146   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.506219   65393 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0404 22:50:56.507513   65393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:50:56.507838   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.507872   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.522700   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0404 22:50:56.523202   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.523675   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.523702   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.524012   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.524213   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.560806   65393 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 22:50:56.562468   65393 start.go:297] selected driver: kvm2
	I0404 22:50:56.562485   65393 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.562593   65393 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:50:56.563382   65393 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.563463   65393 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:50:56.579804   65393 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:50:56.580216   65393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:50:56.580311   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:50:56.580325   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:50:56.580361   65393 start.go:340] cluster config:
	{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.580508   65393 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.582753   65393 out.go:177] * Starting "old-k8s-version-343162" primary control-plane node in "old-k8s-version-343162" cluster
	I0404 22:50:55.940353   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:50:56.584381   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:50:56.584440   65393 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0404 22:50:56.584451   65393 cache.go:56] Caching tarball of preloaded images
	I0404 22:50:56.584585   65393 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 22:50:56.584616   65393 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0404 22:50:56.584754   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:50:56.585041   65393 start.go:360] acquireMachinesLock for old-k8s-version-343162: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:50:59.012433   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:05.092380   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:08.164456   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:14.244461   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:17.316477   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:23.396389   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:26.468349   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:32.548445   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:35.620422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:41.700386   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:44.772391   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:50.852426   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:53.924460   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:00.004442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:03.076397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:09.156418   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:12.228432   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:18.308468   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:21.380441   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:27.460405   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:30.532470   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:36.612450   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:39.684473   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:45.764397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:48.836435   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:54.916369   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:57.992400   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:04.068476   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:07.140457   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:13.220492   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:16.292379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:22.372422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:25.444444   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:31.524421   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:34.596378   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:40.676452   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:43.748475   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:49.828481   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:52.900427   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:58.980363   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:02.052442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:08.132379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:11.204438   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:14.209023   64902 start.go:364] duration metric: took 4m27.708792236s to acquireMachinesLock for "embed-certs-143118"
	I0404 22:54:14.209073   64902 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:14.209081   64902 fix.go:54] fixHost starting: 
	I0404 22:54:14.209454   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:14.209493   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:14.224780   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0404 22:54:14.225202   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:14.225774   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:14.225796   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:14.226086   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:14.226268   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:14.226381   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:14.227982   64902 fix.go:112] recreateIfNeeded on embed-certs-143118: state=Stopped err=<nil>
	I0404 22:54:14.228030   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	W0404 22:54:14.228195   64902 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:14.229974   64902 out.go:177] * Restarting existing kvm2 VM for "embed-certs-143118" ...
	I0404 22:54:14.231465   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Start
	I0404 22:54:14.231647   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring networks are active...
	I0404 22:54:14.232454   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network default is active
	I0404 22:54:14.232844   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network mk-embed-certs-143118 is active
	I0404 22:54:14.233268   64902 main.go:141] libmachine: (embed-certs-143118) Getting domain xml...
	I0404 22:54:14.234059   64902 main.go:141] libmachine: (embed-certs-143118) Creating domain...
	I0404 22:54:15.443570   64902 main.go:141] libmachine: (embed-certs-143118) Waiting to get IP...
	I0404 22:54:15.444501   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.444881   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.444974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.444857   65866 retry.go:31] will retry after 235.384261ms: waiting for machine to come up
	I0404 22:54:15.682336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.682843   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.682889   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.682802   65866 retry.go:31] will retry after 298.217645ms: waiting for machine to come up
	I0404 22:54:15.982346   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.982793   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.982822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.982737   65866 retry.go:31] will retry after 388.227781ms: waiting for machine to come up
	I0404 22:54:16.372259   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.372758   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.372783   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.372709   65866 retry.go:31] will retry after 455.494549ms: waiting for machine to come up
	I0404 22:54:14.206346   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:14.206380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206696   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:54:14.206723   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206981   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:54:14.208882   64791 machine.go:97] duration metric: took 4m37.40831033s to provisionDockerMachine
	I0404 22:54:14.208934   64791 fix.go:56] duration metric: took 4m37.430672573s for fixHost
	I0404 22:54:14.208943   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 4m37.430710876s
	W0404 22:54:14.208977   64791 start.go:713] error starting host: provision: host is not running
	W0404 22:54:14.209074   64791 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0404 22:54:14.209085   64791 start.go:728] Will try again in 5 seconds ...
	I0404 22:54:16.829381   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.829863   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.829895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.829758   65866 retry.go:31] will retry after 476.920945ms: waiting for machine to come up
	I0404 22:54:17.308558   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:17.309233   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:17.309260   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:17.309155   65866 retry.go:31] will retry after 723.322819ms: waiting for machine to come up
	I0404 22:54:18.034138   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.034605   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.034633   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.034559   65866 retry.go:31] will retry after 858.492179ms: waiting for machine to come up
	I0404 22:54:18.894172   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.894590   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.894621   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.894541   65866 retry.go:31] will retry after 1.243998506s: waiting for machine to come up
	I0404 22:54:20.140387   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:20.140872   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:20.140906   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:20.140777   65866 retry.go:31] will retry after 1.245446322s: waiting for machine to come up
	I0404 22:54:21.388210   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:21.388626   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:21.388653   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:21.388588   65866 retry.go:31] will retry after 2.315520772s: waiting for machine to come up
	I0404 22:54:19.210605   64791 start.go:360] acquireMachinesLock for default-k8s-diff-port-952083: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:54:23.707374   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:23.707938   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:23.707971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:23.707895   65866 retry.go:31] will retry after 1.925131112s: waiting for machine to come up
	I0404 22:54:25.635778   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:25.636251   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:25.636281   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:25.636192   65866 retry.go:31] will retry after 3.393560306s: waiting for machine to come up
	I0404 22:54:29.033804   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:29.034284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:29.034311   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:29.034223   65866 retry.go:31] will retry after 4.387913748s: waiting for machine to come up
	I0404 22:54:34.941563   65047 start.go:364] duration metric: took 4m31.283417073s to acquireMachinesLock for "no-preload-024416"
	I0404 22:54:34.941648   65047 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:34.941658   65047 fix.go:54] fixHost starting: 
	I0404 22:54:34.942144   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:34.942187   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:34.959447   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0404 22:54:34.959938   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:34.960536   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:54:34.960563   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:34.960909   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:34.961137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:34.961305   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:54:34.963158   65047 fix.go:112] recreateIfNeeded on no-preload-024416: state=Stopped err=<nil>
	I0404 22:54:34.963183   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	W0404 22:54:34.963366   65047 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:34.965339   65047 out.go:177] * Restarting existing kvm2 VM for "no-preload-024416" ...
	I0404 22:54:33.427303   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427873   64902 main.go:141] libmachine: (embed-certs-143118) Found IP for machine: 192.168.61.137
	I0404 22:54:33.427903   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has current primary IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427916   64902 main.go:141] libmachine: (embed-certs-143118) Reserving static IP address...
	I0404 22:54:33.428384   64902 main.go:141] libmachine: (embed-certs-143118) Reserved static IP address: 192.168.61.137
	I0404 22:54:33.428436   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.428454   64902 main.go:141] libmachine: (embed-certs-143118) Waiting for SSH to be available...
	I0404 22:54:33.428483   64902 main.go:141] libmachine: (embed-certs-143118) DBG | skip adding static IP to network mk-embed-certs-143118 - found existing host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"}
	I0404 22:54:33.428496   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Getting to WaitForSSH function...
	I0404 22:54:33.430650   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.430971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.430999   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.431167   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH client type: external
	I0404 22:54:33.431187   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa (-rw-------)
	I0404 22:54:33.431213   64902 main.go:141] libmachine: (embed-certs-143118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:33.431231   64902 main.go:141] libmachine: (embed-certs-143118) DBG | About to run SSH command:
	I0404 22:54:33.431247   64902 main.go:141] libmachine: (embed-certs-143118) DBG | exit 0
	I0404 22:54:33.556780   64902 main.go:141] libmachine: (embed-certs-143118) DBG | SSH cmd err, output: <nil>: 
	I0404 22:54:33.557184   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetConfigRaw
	I0404 22:54:33.557821   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.560786   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561122   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.561156   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561482   64902 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/config.json ...
	I0404 22:54:33.561714   64902 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:33.561738   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:33.562027   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.564494   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564777   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.564800   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564996   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.565203   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565364   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565525   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.565660   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.565840   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.565851   64902 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:33.672777   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:33.672807   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673051   64902 buildroot.go:166] provisioning hostname "embed-certs-143118"
	I0404 22:54:33.673072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673221   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.675895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676276   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.676302   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676441   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.676631   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676783   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676920   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.677099   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.677291   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.677306   64902 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-143118 && echo "embed-certs-143118" | sudo tee /etc/hostname
	I0404 22:54:33.800210   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-143118
	
	I0404 22:54:33.800234   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.802902   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.803313   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803464   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.803699   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.803917   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.804130   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.804307   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.804477   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.804493   64902 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-143118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-143118/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-143118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:33.922754   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:33.922787   64902 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:33.922810   64902 buildroot.go:174] setting up certificates
	I0404 22:54:33.922821   64902 provision.go:84] configureAuth start
	I0404 22:54:33.922829   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.923168   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.926018   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926349   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.926376   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926536   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.928860   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929202   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.929225   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929405   64902 provision.go:143] copyHostCerts
	I0404 22:54:33.929464   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:33.929474   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:33.929538   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:33.929623   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:33.929631   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:33.929654   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:33.929705   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:33.929712   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:33.929733   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:33.929781   64902 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.embed-certs-143118 san=[127.0.0.1 192.168.61.137 embed-certs-143118 localhost minikube]
	I0404 22:54:34.248318   64902 provision.go:177] copyRemoteCerts
	I0404 22:54:34.248368   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:34.248392   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.251549   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.251969   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.252005   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.252162   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.252415   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.252592   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.252806   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.340287   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:54:34.367083   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:34.392473   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:34.418850   64902 provision.go:87] duration metric: took 496.019024ms to configureAuth
	I0404 22:54:34.418876   64902 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:34.419050   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:34.419118   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.422414   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422794   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.422822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422989   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.423196   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423395   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423548   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.423706   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.423903   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.423926   64902 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:34.698283   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:34.698315   64902 machine.go:97] duration metric: took 1.136579802s to provisionDockerMachine
	I0404 22:54:34.698330   64902 start.go:293] postStartSetup for "embed-certs-143118" (driver="kvm2")
	I0404 22:54:34.698343   64902 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:34.698362   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.698738   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:34.698767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.701491   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.701869   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.701899   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.702062   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.702269   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.702410   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.702580   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.787376   64902 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:34.791940   64902 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:34.791969   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:34.792032   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:34.792113   64902 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:34.792228   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:34.801800   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:34.828107   64902 start.go:296] duration metric: took 129.762377ms for postStartSetup
	I0404 22:54:34.828175   64902 fix.go:56] duration metric: took 20.61909326s for fixHost
	I0404 22:54:34.828200   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.831336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.831914   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.831939   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.832211   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.832443   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832725   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832911   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.833074   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.833242   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.833253   64902 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:54:34.941376   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271274.913625729
	
	I0404 22:54:34.941401   64902 fix.go:216] guest clock: 1712271274.913625729
	I0404 22:54:34.941409   64902 fix.go:229] Guest: 2024-04-04 22:54:34.913625729 +0000 UTC Remote: 2024-04-04 22:54:34.828180786 +0000 UTC m=+288.480480037 (delta=85.444943ms)
	I0404 22:54:34.941435   64902 fix.go:200] guest clock delta is within tolerance: 85.444943ms
	I0404 22:54:34.941442   64902 start.go:83] releasing machines lock for "embed-certs-143118", held for 20.732383788s
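	[editorial sketch] The fix.go entries above boil down to reading the guest clock over SSH (the date command whose output appears above as 1712271274.913625729), comparing it against the host clock, and accepting the drift when it stays under a tolerance. The following is a rough, self-contained Go sketch of that comparison, not minikube's actual code; the one-second tolerance and the parsing helper are illustrative assumptions, and the two timestamps are copied from the log lines above.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch turns "seconds.nanoseconds" output such as "1712271274.913625729"
	// into a time.Time. Illustrative only; error handling is minimal.
	func parseEpoch(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// pad the fractional part out to nine digits of nanoseconds
			frac := parts[1] + strings.Repeat("0", 9)
			nsec, err = strconv.ParseInt(frac[:9], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// guest clock string and remote host time taken from the fix.go lines above
		guest, err := parseEpoch("1712271274.913625729")
		if err != nil {
			panic(err)
		}
		host := time.Date(2024, 4, 4, 22, 54, 34, 828180786, time.UTC)

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed threshold, for illustration only
		fmt.Printf("guest clock delta is %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}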
	I0404 22:54:34.941472   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.941770   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:34.944521   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.944943   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.944973   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.945137   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945761   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945994   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.946079   64902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:34.946123   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.946244   64902 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:34.946271   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.948974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949059   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949433   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949468   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949503   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949597   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949832   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949893   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950007   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950094   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950167   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950239   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.950311   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:35.061691   64902 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:35.068941   64902 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:35.222979   64902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:35.230776   64902 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:35.230861   64902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:35.250962   64902 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:35.250993   64902 start.go:494] detecting cgroup driver to use...
	I0404 22:54:35.251078   64902 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:35.270027   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:35.286582   64902 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:35.286642   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:35.305465   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:35.323473   64902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:35.448815   64902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:35.601864   64902 docker.go:233] disabling docker service ...
	I0404 22:54:35.602013   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:35.621617   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:35.638755   64902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:35.784051   64902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:35.909763   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:35.925820   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:35.946492   64902 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:35.946555   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.958517   64902 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:35.958584   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.971470   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.983820   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.996000   64902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:36.009730   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.022318   64902 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.042685   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.054710   64902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:36.066952   64902 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:36.067023   64902 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:36.083843   64902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:54:36.096564   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:36.228943   64902 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:54:36.376198   64902 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:36.376276   64902 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:36.382097   64902 start.go:562] Will wait 60s for crictl version
	I0404 22:54:36.382176   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:54:36.386651   64902 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:36.425845   64902 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:36.425931   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.456397   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.491436   64902 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
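	[editorial sketch] The two "Will wait 60s" entries above amount to polling until /var/run/crio/crio.sock exists and crictl answers. Below is a stand-alone Go sketch of that kind of wait loop, run locally rather than over SSH as minikube does; the 500ms poll interval is an assumption.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitFor polls check() every 500ms until it succeeds or the timeout expires.
	func waitFor(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v: %w", timeout, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		const sock = "/var/run/crio/crio.sock" // socket path from the log above

		// First wait for the CRI-O socket to appear, then for crictl to answer.
		if err := waitFor(60*time.Second, func() error {
			_, err := os.Stat(sock)
			return err
		}); err != nil {
			fmt.Fprintln(os.Stderr, "socket wait failed:", err)
			os.Exit(1)
		}
		if err := waitFor(60*time.Second, func() error {
			return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
		}); err != nil {
			fmt.Fprintln(os.Stderr, "crictl wait failed:", err)
			os.Exit(1)
		}
		fmt.Println("cri-o is up")
	}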
	I0404 22:54:34.967133   65047 main.go:141] libmachine: (no-preload-024416) Calling .Start
	I0404 22:54:34.967354   65047 main.go:141] libmachine: (no-preload-024416) Ensuring networks are active...
	I0404 22:54:34.968261   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network default is active
	I0404 22:54:34.968700   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network mk-no-preload-024416 is active
	I0404 22:54:34.969252   65047 main.go:141] libmachine: (no-preload-024416) Getting domain xml...
	I0404 22:54:34.969956   65047 main.go:141] libmachine: (no-preload-024416) Creating domain...
	I0404 22:54:36.226773   65047 main.go:141] libmachine: (no-preload-024416) Waiting to get IP...
	I0404 22:54:36.227926   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.228451   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.228535   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.228427   65995 retry.go:31] will retry after 247.657069ms: waiting for machine to come up
	I0404 22:54:36.477997   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.478543   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.478576   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.478489   65995 retry.go:31] will retry after 249.517341ms: waiting for machine to come up
	I0404 22:54:36.730176   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.730636   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.730665   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.730591   65995 retry.go:31] will retry after 476.552832ms: waiting for machine to come up
	I0404 22:54:37.209208   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.209677   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.209709   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.209661   65995 retry.go:31] will retry after 435.547363ms: waiting for machine to come up
	I0404 22:54:37.647412   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.647985   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.648014   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.647932   65995 retry.go:31] will retry after 506.589084ms: waiting for machine to come up
	I0404 22:54:38.155673   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:38.156207   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:38.156255   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:38.156171   65995 retry.go:31] will retry after 890.290504ms: waiting for machine to come up
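	[editorial sketch] The retry.go entries above show the "waiting for machine to come up" loop backing off with growing, slightly randomized delays while the domain has no IP yet. A small stand-alone Go sketch of that pattern follows; the growth factor, jitter range, and attempt count are assumptions chosen only to mimic the intervals seen in the log.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
	// sleeping a little longer (with jitter) after each failure.
	func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			// add up to 50% jitter and grow the base delay, similar in spirit
			// to the increasing waits printed by retry.go above
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return errors.New("machine never reported an IP address")
	}

	func main() {
		tries := 0
		err := retryWithBackoff(10, 250*time.Millisecond, func() error {
			tries++
			if tries < 4 {
				return errors.New("unable to find current IP address") // simulated lookup failure
			}
			return nil
		})
		fmt.Println("result:", err)
	}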
	I0404 22:54:36.493249   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:36.496475   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.496905   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:36.496937   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.497232   64902 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:36.502139   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:36.517272   64902 kubeadm.go:877] updating cluster {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:36.517432   64902 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:54:36.517497   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:36.564031   64902 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:54:36.564108   64902 ssh_runner.go:195] Run: which lz4
	I0404 22:54:36.568713   64902 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:54:36.573960   64902 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:54:36.574006   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:54:38.239579   64902 crio.go:462] duration metric: took 1.670908361s to copy over tarball
	I0404 22:54:38.239657   64902 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:54:40.665443   64902 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.425757745s)
	I0404 22:54:40.665487   64902 crio.go:469] duration metric: took 2.425877834s to extract the tarball
	I0404 22:54:40.665496   64902 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:54:40.703772   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:40.754221   64902 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:54:40.754248   64902 cache_images.go:84] Images are preloaded, skipping loading
	I0404 22:54:40.754256   64902 kubeadm.go:928] updating node { 192.168.61.137 8443 v1.29.3 crio true true} ...
	I0404 22:54:40.754363   64902 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-143118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:54:40.754464   64902 ssh_runner.go:195] Run: crio config
	I0404 22:54:40.811854   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:40.811888   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:40.811906   64902 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:54:40.811936   64902 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-143118 NodeName:embed-certs-143118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:54:40.812177   64902 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-143118"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:54:40.812313   64902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:54:40.824209   64902 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:54:40.824295   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:54:40.835585   64902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0404 22:54:40.854777   64902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:54:40.875571   64902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0404 22:54:40.896639   64902 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0404 22:54:40.901267   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:40.916843   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:41.050188   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:41.070935   64902 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118 for IP: 192.168.61.137
	I0404 22:54:41.070957   64902 certs.go:194] generating shared ca certs ...
	I0404 22:54:41.070972   64902 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:41.071132   64902 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:54:41.071191   64902 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:54:41.071205   64902 certs.go:256] generating profile certs ...
	I0404 22:54:41.071322   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/client.key
	I0404 22:54:41.071399   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key.e7f8ac5b
	I0404 22:54:41.071435   64902 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key
	I0404 22:54:41.071553   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:54:41.071585   64902 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:54:41.071596   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:54:41.071624   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:54:41.071649   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:54:41.071670   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:54:41.071725   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:41.072445   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:54:41.108370   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:54:41.142072   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:54:41.179263   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:54:41.226769   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0404 22:54:41.273570   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:54:41.306526   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:54:41.336764   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:54:41.367053   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:54:39.048106   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.048587   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.048616   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.048540   65995 retry.go:31] will retry after 946.742057ms: waiting for machine to come up
	I0404 22:54:39.997241   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.997737   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.997774   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.997679   65995 retry.go:31] will retry after 1.053079472s: waiting for machine to come up
	I0404 22:54:41.052284   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:41.052796   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:41.052835   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:41.052762   65995 retry.go:31] will retry after 1.551456209s: waiting for machine to come up
	I0404 22:54:42.606789   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:42.607297   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:42.607335   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:42.607228   65995 retry.go:31] will retry after 2.022953695s: waiting for machine to come up
	I0404 22:54:41.395449   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:54:41.549507   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:54:41.578394   64902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:54:41.597960   64902 ssh_runner.go:195] Run: openssl version
	I0404 22:54:41.604217   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:54:41.618340   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623568   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623626   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.630781   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:54:41.644981   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:54:41.657992   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663188   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663242   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.670992   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:54:41.683812   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:54:41.696848   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702441   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702499   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.709270   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:54:41.722509   64902 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:54:41.727688   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:54:41.734456   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:54:41.741006   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:54:41.748106   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:54:41.754559   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:54:41.761107   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:54:41.768416   64902 kubeadm.go:391] StartCluster: {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:54:41.768497   64902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:54:41.768557   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.815664   64902 cri.go:89] found id: ""
	I0404 22:54:41.815737   64902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:54:41.829255   64902 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:54:41.829283   64902 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:54:41.829290   64902 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:54:41.829333   64902 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:54:41.842482   64902 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:54:41.843868   64902 kubeconfig.go:125] found "embed-certs-143118" server: "https://192.168.61.137:8443"
	I0404 22:54:41.846707   64902 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:54:41.858505   64902 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.137
	I0404 22:54:41.858544   64902 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:54:41.858558   64902 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:54:41.858616   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.905608   64902 cri.go:89] found id: ""
	I0404 22:54:41.905686   64902 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:54:41.928336   64902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:54:41.939893   64902 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:54:41.939913   64902 kubeadm.go:156] found existing configuration files:
	
	I0404 22:54:41.939967   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:54:41.950100   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:54:41.950159   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:54:41.961297   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:54:41.972267   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:54:41.972350   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:54:41.983395   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:54:41.994162   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:54:41.994256   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:54:42.005118   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:54:42.015514   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:54:42.015583   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:54:42.027013   64902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:54:42.037495   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:42.144612   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.301722   64902 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.157071112s)
	I0404 22:54:43.301751   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.527881   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.621621   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.708816   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:54:43.708949   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.209626   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.709138   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.734681   64902 api_server.go:72] duration metric: took 1.025863443s to wait for apiserver process to appear ...
	I0404 22:54:44.734715   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:54:44.734742   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.735296   64902 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
	I0404 22:54:45.235123   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.632681   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:44.633163   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:44.633192   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:44.633124   65995 retry.go:31] will retry after 2.627056472s: waiting for machine to come up
	I0404 22:54:47.262212   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:47.262588   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:47.262618   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:47.262539   65995 retry.go:31] will retry after 3.141452547s: waiting for machine to come up
	I0404 22:54:47.737311   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.737339   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:47.737356   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:47.782485   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.782519   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:48.235046   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.240034   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.240068   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:48.735331   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.745583   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.745624   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:49.235513   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:49.239826   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:54:49.248617   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:54:49.248647   64902 api_server.go:131] duration metric: took 4.51392228s to wait for apiserver health ...
	I0404 22:54:49.248655   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:49.248662   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:49.250501   64902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:54:49.252206   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:54:49.271104   64902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:54:49.297257   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:54:49.309692   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:54:49.309726   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:54:49.309733   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:54:49.309741   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:54:49.309746   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:54:49.309751   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:54:49.309756   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:54:49.309764   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:54:49.309768   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:54:49.309775   64902 system_pods.go:74] duration metric: took 12.491664ms to wait for pod list to return data ...
	I0404 22:54:49.309782   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:54:49.315864   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:54:49.315890   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:54:49.315901   64902 node_conditions.go:105] duration metric: took 6.114458ms to run NodePressure ...
	I0404 22:54:49.315926   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:49.610849   64902 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615336   64902 kubeadm.go:733] kubelet initialised
	I0404 22:54:49.615356   64902 kubeadm.go:734] duration metric: took 4.484086ms waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615364   64902 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:49.621224   64902 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.626963   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.626988   64902 pod_ready.go:81] duration metric: took 5.735991ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.626996   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.627002   64902 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.633596   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633620   64902 pod_ready.go:81] duration metric: took 6.610566ms for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.633628   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633634   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.639137   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639168   64902 pod_ready.go:81] duration metric: took 5.51865ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.639177   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639183   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.701443   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701475   64902 pod_ready.go:81] duration metric: took 62.283656ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.701488   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701497   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.100946   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100974   64902 pod_ready.go:81] duration metric: took 399.467513ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.100984   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100990   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.501078   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501115   64902 pod_ready.go:81] duration metric: took 400.110991ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.501126   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501135   64902 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.902773   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902804   64902 pod_ready.go:81] duration metric: took 401.658155ms for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.902817   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902826   64902 pod_ready.go:38] duration metric: took 1.287454227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:50.902842   64902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:54:50.915531   64902 ops.go:34] apiserver oom_adj: -16
	I0404 22:54:50.915553   64902 kubeadm.go:591] duration metric: took 9.086257382s to restartPrimaryControlPlane
	I0404 22:54:50.915562   64902 kubeadm.go:393] duration metric: took 9.147152807s to StartCluster
	I0404 22:54:50.915580   64902 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.915663   64902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:54:50.917278   64902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.917558   64902 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:54:50.919536   64902 out.go:177] * Verifying Kubernetes components...
	I0404 22:54:50.917632   64902 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:54:50.917801   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:50.921079   64902 addons.go:69] Setting default-storageclass=true in profile "embed-certs-143118"
	I0404 22:54:50.921090   64902 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-143118"
	I0404 22:54:50.921093   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:50.921114   64902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-143118"
	I0404 22:54:50.921138   64902 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-143118"
	W0404 22:54:50.921153   64902 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:54:50.921079   64902 addons.go:69] Setting metrics-server=true in profile "embed-certs-143118"
	I0404 22:54:50.921195   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921217   64902 addons.go:234] Setting addon metrics-server=true in "embed-certs-143118"
	W0404 22:54:50.921232   64902 addons.go:243] addon metrics-server should already be in state true
	I0404 22:54:50.921254   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921467   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921507   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921683   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921709   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921753   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921781   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.937216   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36235
	I0404 22:54:50.937299   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0404 22:54:50.937762   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.937802   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.938291   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938313   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938321   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938333   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938661   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.938670   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.939249   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939271   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939300   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.939311   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.941042   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0404 22:54:50.941525   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.942072   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.942100   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.942499   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.942729   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.946461   64902 addons.go:234] Setting addon default-storageclass=true in "embed-certs-143118"
	W0404 22:54:50.946482   64902 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:54:50.946510   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.946882   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.946915   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.955518   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0404 22:54:50.955557   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0404 22:54:50.955998   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956052   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956571   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956600   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956720   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956747   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956986   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957070   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957190   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.957232   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.959071   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.959241   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.961349   64902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:50.963021   64902 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:54:50.964576   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:54:50.964596   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:54:50.963102   64902 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:50.964618   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.964632   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:54:50.964657   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.965167   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0404 22:54:50.965591   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.966202   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.966227   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.966554   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.967093   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.967129   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.968169   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968490   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968564   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.968589   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968740   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.968871   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969009   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969078   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.969104   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.969280   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.969336   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.969576   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969732   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969883   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.985964   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0404 22:54:50.986398   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.986935   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.986961   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.987432   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.987635   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.989705   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.989956   64902 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:50.989970   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:54:50.989984   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.993058   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993505   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.993540   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993703   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.993916   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.994072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.994255   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:51.108711   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:51.128024   64902 node_ready.go:35] waiting up to 6m0s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:51.190119   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:51.199620   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:54:51.199648   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:54:51.223007   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:51.235174   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:54:51.235203   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:54:51.260985   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:51.261010   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:54:51.283555   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:52.285657   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095501296s)
	I0404 22:54:52.285706   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.062659931s)
	I0404 22:54:52.285731   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285744   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285755   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.285767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286091   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286108   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286118   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286128   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286218   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.286282   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286294   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286306   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286328   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.288015   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288022   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288013   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288092   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288106   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.288149   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.293937   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.293962   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.294199   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.294243   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.294254   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320063   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.036455618s)
	I0404 22:54:52.320206   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320223   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320525   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320546   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320559   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320568   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320821   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320853   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320860   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320870   64902 addons.go:470] Verifying addon metrics-server=true in "embed-certs-143118"
	I0404 22:54:52.323920   64902 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0404 22:54:50.405817   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:50.406204   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:50.406240   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:50.406163   65995 retry.go:31] will retry after 3.600637009s: waiting for machine to come up
	I0404 22:54:55.277382   65393 start.go:364] duration metric: took 3m58.692301923s to acquireMachinesLock for "old-k8s-version-343162"
	I0404 22:54:55.277485   65393 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:55.277502   65393 fix.go:54] fixHost starting: 
	I0404 22:54:55.277878   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:55.277920   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:55.297525   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0404 22:54:55.297963   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:55.298433   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:54:55.298456   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:55.298792   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:55.298962   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:54:55.299129   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetState
	I0404 22:54:55.300682   65393 fix.go:112] recreateIfNeeded on old-k8s-version-343162: state=Stopped err=<nil>
	I0404 22:54:55.300717   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	W0404 22:54:55.300937   65393 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:55.303925   65393 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-343162" ...
	I0404 22:54:52.325280   64902 addons.go:505] duration metric: took 1.407646741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0404 22:54:53.132199   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.136881   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.305412   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .Start
	I0404 22:54:55.305607   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring networks are active...
	I0404 22:54:55.306344   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network default is active
	I0404 22:54:55.306786   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network mk-old-k8s-version-343162 is active
	I0404 22:54:55.307281   65393 main.go:141] libmachine: (old-k8s-version-343162) Getting domain xml...
	I0404 22:54:55.308086   65393 main.go:141] libmachine: (old-k8s-version-343162) Creating domain...
	I0404 22:54:54.010930   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011401   65047 main.go:141] libmachine: (no-preload-024416) Found IP for machine: 192.168.50.77
	I0404 22:54:54.011426   65047 main.go:141] libmachine: (no-preload-024416) Reserving static IP address...
	I0404 22:54:54.011444   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has current primary IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011871   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.011896   65047 main.go:141] libmachine: (no-preload-024416) DBG | skip adding static IP to network mk-no-preload-024416 - found existing host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"}
	I0404 22:54:54.011924   65047 main.go:141] libmachine: (no-preload-024416) Reserved static IP address: 192.168.50.77
	I0404 22:54:54.011942   65047 main.go:141] libmachine: (no-preload-024416) Waiting for SSH to be available...
	I0404 22:54:54.011956   65047 main.go:141] libmachine: (no-preload-024416) DBG | Getting to WaitForSSH function...
	I0404 22:54:54.014714   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015164   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.015190   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015357   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH client type: external
	I0404 22:54:54.015401   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa (-rw-------)
	I0404 22:54:54.015469   65047 main.go:141] libmachine: (no-preload-024416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:54.015495   65047 main.go:141] libmachine: (no-preload-024416) DBG | About to run SSH command:
	I0404 22:54:54.015513   65047 main.go:141] libmachine: (no-preload-024416) DBG | exit 0
	I0404 22:54:54.140293   65047 main.go:141] libmachine: (no-preload-024416) DBG | SSH cmd err, output: <nil>: 
	I0404 22:54:54.140713   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetConfigRaw
	I0404 22:54:54.141290   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.144062   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144446   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.144476   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144752   65047 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/config.json ...
	I0404 22:54:54.144967   65047 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:54.144984   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:54.145210   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.147699   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148016   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.148046   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148244   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.148430   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148571   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.148878   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.149071   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.149082   65047 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:54.256984   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:54.257021   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257324   65047 buildroot.go:166] provisioning hostname "no-preload-024416"
	I0404 22:54:54.257354   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257530   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.259995   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260339   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.260369   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.260709   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260862   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260986   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.261172   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.261348   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.261360   65047 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-024416 && echo "no-preload-024416" | sudo tee /etc/hostname
	I0404 22:54:54.383376   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-024416
	
	I0404 22:54:54.383401   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.386482   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.386993   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.387035   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.387179   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.387368   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387705   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.387954   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.388225   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.388258   65047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-024416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-024416/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-024416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:54.506307   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:54.506341   65047 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:54.506358   65047 buildroot.go:174] setting up certificates
	I0404 22:54:54.506366   65047 provision.go:84] configureAuth start
	I0404 22:54:54.506375   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.506653   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.509737   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510146   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.510177   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.512864   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513212   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.513244   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513339   65047 provision.go:143] copyHostCerts
	I0404 22:54:54.513391   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:54.513410   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:54.513464   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:54.513547   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:54.513559   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:54.513579   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:54.513642   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:54.513653   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:54.513672   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:54.513728   65047 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.no-preload-024416 san=[127.0.0.1 192.168.50.77 localhost minikube no-preload-024416]
	I0404 22:54:54.574205   65047 provision.go:177] copyRemoteCerts
	I0404 22:54:54.574261   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:54.574292   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.577012   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577382   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.577417   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577576   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.577750   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.577913   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.578009   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:54.662795   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:54:54.688507   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:54.714322   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:54.743237   65047 provision.go:87] duration metric: took 236.859201ms to configureAuth
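
The lines above show configureAuth regenerating the host certificates and then copying the CA and server certs to /etc/docker on the guest. A minimal Go sketch of that copy step follows; the local file names are shortened placeholders for the .minikube paths in the log, and the plain scp invocation stands in for minikube's internal SSH-based file copy.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Destination paths match the /etc/docker targets in the log; the local
	// cert locations are shortened placeholders for the .minikube tree.
	certs := map[string]string{
		"server-key.pem": "/etc/docker/server-key.pem",
		"ca.pem":         "/etc/docker/ca.pem",
		"server.pem":     "/etc/docker/server.pem",
	}
	for src, dst := range certs {
		// scp each cert to the guest over the SSH endpoint seen above.
		cmd := exec.Command("scp", src, fmt.Sprintf("docker@192.168.50.77:%s", dst))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("copy %s failed: %v\n%s", src, err, out)
		}
	}
}
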
	I0404 22:54:54.743277   65047 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:54.743504   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:54:54.743573   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.746762   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747249   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.747277   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747454   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.747656   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747791   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747912   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.748073   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.748321   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.748342   65047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:55.023000   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:55.023025   65047 machine.go:97] duration metric: took 878.045187ms to provisionDockerMachine
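
Provisioning finishes by writing /etc/sysconfig/crio.minikube with an --insecure-registry entry for the service CIDR and restarting CRI-O. The %!s(MISSING) in the logged command appears to be Go's fmt placeholder for a literal %s that reached the logger without an argument; the remote command itself uses a plain printf %s. A small Go sketch that assembles the same command string:

package main

import "fmt"

func main() {
	// The registry CIDR (10.96.0.0/12) and target file match the log above.
	opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '"
	cmd := fmt.Sprintf(
		"sudo mkdir -p /etc/sysconfig && printf %%s %q | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio",
		opts)
	fmt.Println(cmd) // this string is what would be executed over SSH on the guest
}
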
	I0404 22:54:55.023038   65047 start.go:293] postStartSetup for "no-preload-024416" (driver="kvm2")
	I0404 22:54:55.023052   65047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:55.023072   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.023459   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:55.023493   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.026491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.026914   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.026938   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.027162   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.027350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.027490   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.027627   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.111497   65047 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:55.116152   65047 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:55.116178   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:55.116260   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:55.116331   65047 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:55.116412   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:55.126233   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:55.159228   65047 start.go:296] duration metric: took 136.175727ms for postStartSetup
	I0404 22:54:55.159267   65047 fix.go:56] duration metric: took 20.217610648s for fixHost
	I0404 22:54:55.159286   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.162009   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162439   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.162469   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162665   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.162941   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163119   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163353   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.163531   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:55.163697   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:55.163707   65047 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:54:55.277216   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271295.257019839
	
	I0404 22:54:55.277241   65047 fix.go:216] guest clock: 1712271295.257019839
	I0404 22:54:55.277254   65047 fix.go:229] Guest: 2024-04-04 22:54:55.257019839 +0000 UTC Remote: 2024-04-04 22:54:55.159270151 +0000 UTC m=+291.659154910 (delta=97.749688ms)
	I0404 22:54:55.277281   65047 fix.go:200] guest clock delta is within tolerance: 97.749688ms
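
The guest-clock check compares the timestamp returned by date +%s.%N on the VM with the host-side time recorded just before the call and accepts the machine when the skew is small. A short Go sketch of the delta computation, using the two timestamps from this run; the tolerance value here is an assumption, not the threshold minikube actually applies:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values mirror the log: host-side ("Remote") timestamp vs. guest clock reading.
	remote := time.Date(2024, 4, 4, 22, 54, 55, 159270151, time.UTC)
	guest := time.Unix(1712271295, 257019839).UTC() // parsed from "date +%s.%N" output
	delta := guest.Sub(remote)                      // ~97.75ms for this run
	const tolerance = 2 * time.Second               // assumed threshold, for illustration only
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
}
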
	I0404 22:54:55.277308   65047 start.go:83] releasing machines lock for "no-preload-024416", held for 20.335673292s
	I0404 22:54:55.277345   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.277650   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:55.280507   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.280968   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.280998   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.281137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281654   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281845   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281930   65047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:55.281967   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.282068   65047 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:55.282085   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.284662   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.284975   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285007   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285034   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285217   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285379   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.285469   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285542   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.285677   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285722   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.286037   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.286197   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.286343   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.404469   65047 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:55.411104   65047 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:55.570700   65047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:55.577807   65047 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:55.577879   65047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:55.595095   65047 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:55.595115   65047 start.go:494] detecting cgroup driver to use...
	I0404 22:54:55.595180   65047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:55.611801   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:55.626402   65047 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:55.626460   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:55.642719   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:55.663140   65047 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:55.784155   65047 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:55.933709   65047 docker.go:233] disabling docker service ...
	I0404 22:54:55.933782   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:55.951743   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:55.966349   65047 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:56.129621   65047 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:56.295246   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:56.312815   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:56.334486   65047 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:56.334554   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.351987   65047 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:56.352067   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.368799   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.390239   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.409407   65047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:56.421785   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.434016   65047 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.457813   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
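
CRI-O is then reconfigured by editing /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned, cgroupfs is selected as the cgroup manager, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A Go sketch of the first few of those sed edits; it is purely illustrative, since locally there is no such file to edit:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// Each entry mirrors one of the sed edits in the log above (paths and values from this run).
	edits := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
	for _, e := range edits {
		// On the real machine these run over SSH on the guest.
		if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
			fmt.Printf("%s: %v\n%s", e, err, out)
		}
	}
}
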
	I0404 22:54:56.474509   65047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:56.489126   65047 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:56.489186   65047 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:56.509461   65047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:54:56.523048   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:56.708034   65047 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:54:56.860854   65047 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:56.860931   65047 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:56.867310   65047 start.go:562] Will wait 60s for crictl version
	I0404 22:54:56.867380   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:56.871431   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:56.911015   65047 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:56.911096   65047 ssh_runner.go:195] Run: crio --version
	I0404 22:54:56.942313   65047 ssh_runner.go:195] Run: crio --version
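
After restarting CRI-O the runner waits up to 60s for the socket and then for crictl version to answer. A hedged sketch of such a polling loop; this is not minikube's actual retry helper, just the same idea expressed directly:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForCrictl polls `crictl version` until it succeeds or the timeout expires,
// mirroring the "Will wait 60s for crictl version" step in the log.
func waitForCrictl(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "/usr/bin/crictl", "version").Run(); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("crictl did not respond within %v", timeout)
}

func main() {
	if err := waitForCrictl(60 * time.Second); err != nil {
		fmt.Println(err)
	}
}
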
	I0404 22:54:56.985247   65047 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0404 22:54:56.986628   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:56.990377   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.990775   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:56.990805   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.991069   65047 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:56.996831   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:57.013429   65047 kubeadm.go:877] updating cluster {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:57.013580   65047 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 22:54:57.013630   65047 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:57.066460   65047 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0404 22:54:57.066491   65047 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:54:57.066546   65047 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.066569   65047 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.066598   65047 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.066677   65047 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.066674   65047 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.066703   65047 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.066755   65047 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0404 22:54:57.066974   65047 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068510   65047 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068579   65047 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.068590   65047 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.068603   65047 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.068648   65047 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.068516   65047 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.068666   65047 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0404 22:54:57.068727   65047 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
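
Because no preload tarball exists for v1.30.0-rc.0, LoadCachedImages checks each required image against the runtime with podman image inspect, removes any stale tag with crictl rmi, and loads the cached tarball from /var/lib/minikube/images with podman load. A compact sketch of that per-image flow; the helper name and the simplified error handling are assumptions:

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the per-image flow in the log: inspect, remove the stale
// tag if the image is not present at the expected hash, then load the cached tarball.
func loadCachedImage(image, tarball string) error {
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present in the container runtime
	}
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run() // ignore "image not found"
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	err := loadCachedImage("registry.k8s.io/kube-proxy:v1.30.0-rc.0",
		"/var/lib/minikube/images/kube-proxy_v1.30.0-rc.0")
	fmt.Println(err)
}
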
	I0404 22:54:57.282811   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.319459   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.329131   65047 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0404 22:54:57.329170   65047 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.329216   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.334176   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.395259   65047 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0404 22:54:57.395307   65047 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.395333   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.395348   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.406668   65047 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0404 22:54:57.406719   65047 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.406769   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.428655   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0404 22:54:57.430830   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.434683   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.439273   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439326   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.439406   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439428   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.565067   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690361   65047 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0404 22:54:57.690402   65047 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.690456   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690478   65047 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0404 22:54:57.690509   65047 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.690556   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690563   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690608   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0404 22:54:57.690635   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0404 22:54:57.690653   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690669   65047 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0404 22:54:57.690702   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690737   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:57.690702   65047 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690667   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690772   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.702038   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.703294   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0404 22:54:57.703314   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.861311   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.632801   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:58.632291   64902 node_ready.go:49] node "embed-certs-143118" has status "Ready":"True"
	I0404 22:54:58.632328   64902 node_ready.go:38] duration metric: took 7.504264997s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:58.632341   64902 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:58.640851   64902 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652075   64902 pod_ready.go:92] pod "coredns-76f75df574-9qh9s" in "kube-system" namespace has status "Ready":"True"
	I0404 22:54:58.652100   64902 pod_ready.go:81] duration metric: took 11.215741ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652114   64902 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:00.659219   64902 pod_ready.go:102] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"False"
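
In parallel, the embed-certs profile waits for the node to report Ready and then for each system-critical pod's Ready condition. A rough equivalent using kubectl's JSONPath output; the test helpers poll the API directly, so this is only an illustration (the kube context flag is omitted):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls the pod's Ready condition, similar to the pod_ready checks above.
func waitPodReady(name, namespace string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pod", name, "-n", namespace,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return true
		}
		time.Sleep(2 * time.Second)
	}
	return false
}

func main() {
	fmt.Println(waitPodReady("etcd-embed-certs-143118", "kube-system", 6*time.Minute))
}
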
	I0404 22:54:56.653772   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting to get IP...
	I0404 22:54:56.654708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.655215   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.655284   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.655205   66200 retry.go:31] will retry after 274.074964ms: waiting for machine to come up
	I0404 22:54:56.930932   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.931386   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.931451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.931345   66200 retry.go:31] will retry after 260.84968ms: waiting for machine to come up
	I0404 22:54:57.193886   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.194346   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.194368   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.194299   66200 retry.go:31] will retry after 334.40821ms: waiting for machine to come up
	I0404 22:54:57.530137   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.530694   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.530742   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.530631   66200 retry.go:31] will retry after 514.498594ms: waiting for machine to come up
	I0404 22:54:58.046394   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.046985   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.047025   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.046905   66200 retry.go:31] will retry after 466.899368ms: waiting for machine to come up
	I0404 22:54:58.515516   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.516090   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.516224   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.516149   66200 retry.go:31] will retry after 670.74835ms: waiting for machine to come up
	I0404 22:54:59.187992   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:59.188549   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:59.188579   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:59.188496   66200 retry.go:31] will retry after 849.489739ms: waiting for machine to come up
	I0404 22:55:00.039689   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:00.040255   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:00.040290   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:00.040190   66200 retry.go:31] will retry after 1.450679545s: waiting for machine to come up
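
The old-k8s-version VM has just been started, so libmachine repeatedly queries the libvirt network for a DHCP lease matching the domain's MAC address, backing off a little longer on each attempt. A sketch of that retry loop; the lookup callback and the exact backoff formula are assumptions, and only the retry-with-growing-delay shape mirrors the log:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookup with a randomized, growing backoff, echoing the
// "will retry after ..." lines above. lookup is a stand-in for the libvirt
// DHCP-lease query, not minikube's real function.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		backoff := time.Duration(200+rand.Intn(300*(i+1))) * time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3)
	fmt.Println(ip, err)
}
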
	I0404 22:54:59.998625   65047 ssh_runner.go:235] Completed: which crictl: (2.307831694s)
	I0404 22:54:59.998689   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.307893929s)
	I0404 22:54:59.998766   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0404 22:54:59.998771   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0: (2.296702121s)
	I0404 22:54:59.998810   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0: (2.295483217s)
	I0404 22:54:59.998704   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:59.998853   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998720   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.307994873s)
	I0404 22:54:59.998819   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.998930   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0404 22:54:59.998951   65047 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998854   65047 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.137514139s)
	I0404 22:54:59.998961   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998990   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998992   65047 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0404 22:54:59.998998   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.999027   65047 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:59.999070   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:55:00.009724   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:00.009945   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0404 22:55:01.660284   64902 pod_ready.go:92] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.660348   64902 pod_ready.go:81] duration metric: took 3.008175771s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.660362   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666661   64902 pod_ready.go:92] pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.666687   64902 pod_ready.go:81] duration metric: took 6.316138ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666701   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672891   64902 pod_ready.go:92] pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.672915   64902 pod_ready.go:81] duration metric: took 6.205783ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672928   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679237   64902 pod_ready.go:92] pod "kube-proxy-psst7" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.679268   64902 pod_ready.go:81] duration metric: took 6.332124ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679280   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833788   64902 pod_ready.go:92] pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.833817   64902 pod_ready.go:81] duration metric: took 154.528833ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833831   64902 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:03.841405   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:05.842970   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:01.492870   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:01.493414   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:01.493440   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:01.493366   66200 retry.go:31] will retry after 1.844779665s: waiting for machine to come up
	I0404 22:55:03.340065   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:03.340708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:03.340739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:03.340650   66200 retry.go:31] will retry after 1.954275124s: waiting for machine to come up
	I0404 22:55:05.297408   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:05.297938   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:05.297986   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:05.297876   66200 retry.go:31] will retry after 1.771664796s: waiting for machine to come up
	I0404 22:55:04.067188   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.068174109s)
	I0404 22:55:04.067285   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0404 22:55:04.067284   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.057537661s)
	I0404 22:55:04.067312   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067322   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0404 22:55:04.067244   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (4.068231608s)
	I0404 22:55:04.067203   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (4.068350332s)
	I0404 22:55:04.067371   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0404 22:55:04.067380   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067395   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0404 22:55:04.067414   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:04.067486   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:06.758753   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.691348313s)
	I0404 22:55:06.758790   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0404 22:55:06.758816   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:06.758812   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.691301898s)
	I0404 22:55:06.758845   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0404 22:55:06.758850   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.691417107s)
	I0404 22:55:06.758876   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0404 22:55:06.758884   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:07.843038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:10.342614   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:07.072194   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:07.072586   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:07.072614   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:07.072532   66200 retry.go:31] will retry after 2.503470191s: waiting for machine to come up
	I0404 22:55:09.577312   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:09.577837   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:09.577877   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:09.577785   66200 retry.go:31] will retry after 3.900843515s: waiting for machine to come up
	I0404 22:55:09.231002   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.472089065s)
	I0404 22:55:09.231033   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0404 22:55:09.231064   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:09.231114   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:10.787933   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.556787731s)
	I0404 22:55:10.788017   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0404 22:55:10.788087   65047 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:10.788163   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:12.668739   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880546401s)
	I0404 22:55:12.668772   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0404 22:55:12.668806   65047 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:12.668877   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:15.073701   64791 start.go:364] duration metric: took 55.863036918s to acquireMachinesLock for "default-k8s-diff-port-952083"
	I0404 22:55:15.073768   64791 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:55:15.073780   64791 fix.go:54] fixHost starting: 
	I0404 22:55:15.074227   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:15.074264   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:15.094449   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0404 22:55:15.094905   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:15.095422   64791 main.go:141] libmachine: Using API Version  1
	I0404 22:55:15.095446   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:15.095809   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:15.096067   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:15.096216   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 22:55:15.097752   64791 fix.go:112] recreateIfNeeded on default-k8s-diff-port-952083: state=Stopped err=<nil>
	I0404 22:55:15.097779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	W0404 22:55:15.097988   64791 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:55:15.100278   64791 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-952083" ...
	I0404 22:55:12.840463   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:14.841903   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:13.482797   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483264   65393 main.go:141] libmachine: (old-k8s-version-343162) Found IP for machine: 192.168.39.247
	I0404 22:55:13.483286   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has current primary IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483293   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserving static IP address...
	I0404 22:55:13.483713   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.483753   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserved static IP address: 192.168.39.247
	I0404 22:55:13.483776   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | skip adding static IP to network mk-old-k8s-version-343162 - found existing host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"}
	I0404 22:55:13.483790   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting for SSH to be available...
	I0404 22:55:13.483810   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Getting to WaitForSSH function...
	I0404 22:55:13.485889   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486260   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.486303   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486379   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH client type: external
	I0404 22:55:13.486411   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa (-rw-------)
	I0404 22:55:13.486451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:13.486465   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | About to run SSH command:
	I0404 22:55:13.486493   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | exit 0
	I0404 22:55:13.616680   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:13.617056   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetConfigRaw
	I0404 22:55:13.617819   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:13.620441   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.620959   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.620987   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.621259   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:55:13.621490   65393 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:13.621512   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:13.621715   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.624353   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.624739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.624769   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.625019   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.625218   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625392   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625540   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.625730   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.625987   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.626005   65393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:13.745162   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:13.745198   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745533   65393 buildroot.go:166] provisioning hostname "old-k8s-version-343162"
	I0404 22:55:13.745558   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745773   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.748881   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749270   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.749304   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749393   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.749619   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749787   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.750304   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.750599   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.750624   65393 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-343162 && echo "old-k8s-version-343162" | sudo tee /etc/hostname
	I0404 22:55:13.887895   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-343162
	
	I0404 22:55:13.887942   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.891149   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891606   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.891657   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891985   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.892226   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892464   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892629   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.892839   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.893110   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.893139   65393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-343162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-343162/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-343162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:14.026612   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
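The SSH snippet above is minikube's idempotent hostname fix-up: the 127.0.1.1 line is rewritten (or appended) only when /etc/hosts does not already map the new hostname. A minimal way to check the result by hand over the same SSH session, using only names taken from the log:

    hostname                                      # expected: old-k8s-version-343162
    grep -n 'old-k8s-version-343162' /etc/hosts   # expect a 127.0.1.1 entry (or an existing match)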
	I0404 22:55:14.026651   65393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:14.026677   65393 buildroot.go:174] setting up certificates
	I0404 22:55:14.026691   65393 provision.go:84] configureAuth start
	I0404 22:55:14.026702   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:14.026996   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:14.030366   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.030831   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.030869   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.031171   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.033684   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034074   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.034115   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034266   65393 provision.go:143] copyHostCerts
	I0404 22:55:14.034319   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:14.034331   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:14.034384   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:14.034471   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:14.034479   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:14.034498   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:14.034559   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:14.034567   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:14.034585   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:14.034633   65393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-343162 san=[127.0.0.1 192.168.39.247 localhost minikube old-k8s-version-343162]
	I0404 22:55:14.305753   65393 provision.go:177] copyRemoteCerts
	I0404 22:55:14.305805   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:14.305830   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.308454   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308772   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.308812   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308940   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.309140   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.309315   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.309461   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.399449   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:14.431583   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0404 22:55:14.459757   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:55:14.489959   65393 provision.go:87] duration metric: took 463.256669ms to configureAuth
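configureAuth above regenerates the docker-machine style server certificate and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A small sketch for inspecting what landed there (paths taken from the scp lines; run inside the VM) to check issuer, subject and expiry of the provisioned material:

    sudo openssl x509 -noout -subject -enddate -in /etc/docker/ca.pem
    sudo openssl x509 -noout -subject -enddate -in /etc/docker/server.pem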
	I0404 22:55:14.489999   65393 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:14.490222   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:55:14.490314   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.493316   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493721   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.493746   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493935   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.494154   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494339   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494495   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.494669   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.494915   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.494940   65393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:14.813278   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:14.813306   65393 machine.go:97] duration metric: took 1.19180289s to provisionDockerMachine
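The last provisioning step writes CRIO_MINIKUBE_OPTIONS (here just the --insecure-registry service CIDR) to /etc/sysconfig/crio.minikube and restarts CRI-O. A sketch for confirming the drop-in on the guest; whether crio.service actually sources that sysconfig file depends on the minikube ISO's unit definition, so the grep is an assumption:

    cat /etc/sysconfig/crio.minikube              # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i environment      # check whether the unit references the sysconfig file
    sudo systemctl restart crio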
	I0404 22:55:14.813318   65393 start.go:293] postStartSetup for "old-k8s-version-343162" (driver="kvm2")
	I0404 22:55:14.813328   65393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:14.813365   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:14.813712   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:14.813739   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.817108   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817543   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.817577   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817770   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.817970   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.818192   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.818371   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.908922   65393 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:14.914161   65393 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:14.914190   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:14.914279   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:14.914388   65393 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:14.914522   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:14.924829   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:14.952058   65393 start.go:296] duration metric: took 138.725667ms for postStartSetup
	I0404 22:55:14.952103   65393 fix.go:56] duration metric: took 19.674601379s for fixHost
	I0404 22:55:14.952151   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.954833   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955233   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.955273   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955501   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.955712   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.955866   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.956022   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.956189   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.956355   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.956367   65393 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:55:15.073532   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271315.019545661
	
	I0404 22:55:15.073560   65393 fix.go:216] guest clock: 1712271315.019545661
	I0404 22:55:15.073569   65393 fix.go:229] Guest: 2024-04-04 22:55:15.019545661 +0000 UTC Remote: 2024-04-04 22:55:14.952108062 +0000 UTC m=+258.528860140 (delta=67.437599ms)
	I0404 22:55:15.073595   65393 fix.go:200] guest clock delta is within tolerance: 67.437599ms
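The clock-skew check above simply compares the guest's `date +%s.%N` output against the host-side timestamp and accepts the drift if it is under the tolerance. Reproducing the reported delta from the two values in the log:

    guest=1712271315.019545661
    remote=1712271314.952108062
    echo "$guest - $remote" | bc    # 0.067437599 s, i.e. the 67.437599ms delta reported as within tolerance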
	I0404 22:55:15.073603   65393 start.go:83] releasing machines lock for "old-k8s-version-343162", held for 19.79616835s
	I0404 22:55:15.073645   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.073982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:15.077001   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077416   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.077464   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077684   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078280   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078473   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078552   65393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:15.078592   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.078687   65393 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:15.078714   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.081320   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081704   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.081724   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081752   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081995   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082190   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082212   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.082249   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.082366   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082519   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082557   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.082639   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082782   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082912   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.165958   65393 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:15.206476   65393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:15.356625   65393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:15.364893   65393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:15.364973   65393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:15.387634   65393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:15.387663   65393 start.go:494] detecting cgroup driver to use...
	I0404 22:55:15.387728   65393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:15.408455   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:15.427753   65393 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:15.427812   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:15.443713   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:15.462657   65393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:15.611611   65393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:15.799003   65393 docker.go:233] disabling docker service ...
	I0404 22:55:15.799058   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:15.819716   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:15.838428   65393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:15.998060   65393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:16.143821   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:16.162284   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:16.185030   65393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0404 22:55:16.185124   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.201354   65393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:16.201422   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.214191   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.232995   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.246872   65393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:16.260730   65393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:16.272532   65393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:16.272601   65393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:16.289516   65393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:16.306546   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:16.469991   65393 ssh_runner.go:195] Run: sudo systemctl restart crio
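The three sed edits above pin the pause image, cgroup driver and conmon cgroup that minikube expects; after the daemon-reload and restart, /etc/crio/crio.conf.d/02-crio.conf should contain lines equivalent to this sketch:

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"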
	I0404 22:55:15.101770   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Start
	I0404 22:55:15.101942   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring networks are active...
	I0404 22:55:15.102800   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network default is active
	I0404 22:55:15.103162   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network mk-default-k8s-diff-port-952083 is active
	I0404 22:55:15.103685   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Getting domain xml...
	I0404 22:55:15.104598   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Creating domain...
	I0404 22:55:16.462772   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting to get IP...
	I0404 22:55:16.463782   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.464303   66369 retry.go:31] will retry after 209.546798ms: waiting for machine to come up
	I0404 22:55:16.618533   65393 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:16.618609   65393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:16.623677   65393 start.go:562] Will wait 60s for crictl version
	I0404 22:55:16.623740   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:16.627698   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:16.667097   65393 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:16.667195   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.700780   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.739287   65393 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
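With CRI-O restarted, the runtime is probed through crictl over the crio.sock path written to /etc/crictl.yaml earlier. Equivalent manual probes (socket path taken from the log):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info    # broader runtime/network status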
	I0404 22:55:13.614563   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0404 22:55:13.614614   65047 cache_images.go:123] Successfully loaded all cached images
	I0404 22:55:13.614620   65047 cache_images.go:92] duration metric: took 16.548112387s to LoadCachedImages
	I0404 22:55:13.614638   65047 kubeadm.go:928] updating node { 192.168.50.77 8443 v1.30.0-rc.0 crio true true} ...
	I0404 22:55:13.614766   65047 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-024416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:13.614859   65047 ssh_runner.go:195] Run: crio config
	I0404 22:55:13.670321   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:13.670352   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:13.670368   65047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:13.670397   65047 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.77 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-024416 NodeName:no-preload-024416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:13.670593   65047 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-024416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
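The generated kubeadm.yaml above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file that is later copied to /var/tmp/minikube/kubeadm.yaml. As a sanity check one could validate it with kubeadm itself; `kubeadm config validate` exists in recent kubeadm releases, so treat this as an assumption for this exact RC build:

    sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml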
	
	I0404 22:55:13.670688   65047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0404 22:55:13.683863   65047 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:13.683944   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:13.694995   65047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0404 22:55:13.717964   65047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0404 22:55:13.738088   65047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0404 22:55:13.762413   65047 ssh_runner.go:195] Run: grep 192.168.50.77	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:13.767294   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:13.783621   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:13.925851   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:13.946583   65047 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416 for IP: 192.168.50.77
	I0404 22:55:13.946612   65047 certs.go:194] generating shared ca certs ...
	I0404 22:55:13.946632   65047 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:13.946873   65047 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:13.946932   65047 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:13.946946   65047 certs.go:256] generating profile certs ...
	I0404 22:55:13.947038   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/client.key
	I0404 22:55:13.947126   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key.8f9148e7
	I0404 22:55:13.947183   65047 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key
	I0404 22:55:13.947338   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:13.947388   65047 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:13.947402   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:13.947442   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:13.947480   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:13.947501   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:13.947538   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:13.948161   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:13.987518   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:14.023442   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:14.054901   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:14.096855   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0404 22:55:14.140589   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:14.171957   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:14.200219   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:55:14.228332   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:14.255955   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:14.281822   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:14.310385   65047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:14.330943   65047 ssh_runner.go:195] Run: openssl version
	I0404 22:55:14.337412   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:14.350470   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355351   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355405   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.361799   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:14.374526   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:14.388775   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394578   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394643   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.401888   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:14.415796   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:14.431180   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437443   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437498   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.444439   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
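Each CA above is made available to OpenSSL by symlinking it under its subject-hash name, which is what the `openssl x509 -hash` calls compute (51391683 for 12554.pem, b5213941 for minikubeCA.pem, 3ec20f2e for 125542.pem). The relationship can be checked directly:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # links back to minikubeCA.pem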
	I0404 22:55:14.458554   65047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:14.463877   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:14.471842   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:14.478423   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:14.485289   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:14.491991   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:14.499145   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:55:14.505734   65047 kubeadm.go:391] StartCluster: {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:14.505842   65047 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:14.505916   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.552731   65047 cri.go:89] found id: ""
	I0404 22:55:14.552818   65047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:14.564111   65047 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:14.564146   65047 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:14.564153   65047 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:14.564206   65047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:14.577068   65047 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:14.578074   65047 kubeconfig.go:125] found "no-preload-024416" server: "https://192.168.50.77:8443"
	I0404 22:55:14.579998   65047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:14.592375   65047 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.77
	I0404 22:55:14.592404   65047 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:14.592415   65047 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:14.592459   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.638880   65047 cri.go:89] found id: ""
	I0404 22:55:14.638970   65047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:14.663010   65047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:14.675814   65047 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:14.675853   65047 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:14.675900   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:14.688308   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:14.688375   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:14.702880   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:14.716656   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:14.716728   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:14.728605   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.739218   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:14.739283   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.750108   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:14.761607   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:14.761671   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
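The block above is minikube's stale-config cleanup: each of the admin/kubelet/controller-manager/scheduler kubeconfigs is removed unless it already points at control-plane.minikube.internal:8443. Condensed into a single loop (same commands as in the log, just folded together):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done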
	I0404 22:55:14.774056   65047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:14.787939   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:14.909162   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.334914   65047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.425719775s)
	I0404 22:55:16.334945   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.609889   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.686278   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.774460   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:16.774563   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.274803   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.775665   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.274707   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.298277   65047 api_server.go:72] duration metric: took 1.523817264s to wait for apiserver process to appear ...
	I0404 22:55:18.298307   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:18.298329   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:18.298961   65047 api_server.go:269] stopped: https://192.168.50.77:8443/healthz: Get "https://192.168.50.77:8443/healthz": dial tcp 192.168.50.77:8443: connect: connection refused
	I0404 22:55:16.842824   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:18.845888   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:21.344260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:16.740949   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:16.744057   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744491   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:16.744533   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744749   65393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:16.750820   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:16.769289   65393 kubeadm.go:877] updating cluster {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:16.769467   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:55:16.769531   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:16.827000   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:16.827077   65393 ssh_runner.go:195] Run: which lz4
	I0404 22:55:16.833494   65393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0404 22:55:16.839898   65393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:16.839972   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0404 22:55:18.896288   65393 crio.go:462] duration metric: took 2.062838778s to copy over tarball
	I0404 22:55:18.896369   65393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:16.676096   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676944   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.676858   66369 retry.go:31] will retry after 272.178949ms: waiting for machine to come up
	I0404 22:55:16.951171   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951771   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.951666   66369 retry.go:31] will retry after 296.205822ms: waiting for machine to come up
	I0404 22:55:17.249449   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250006   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250039   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.249901   66369 retry.go:31] will retry after 395.504604ms: waiting for machine to come up
	I0404 22:55:17.647678   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648471   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.648337   66369 retry.go:31] will retry after 465.589308ms: waiting for machine to come up
	I0404 22:55:18.116437   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117692   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117740   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.117645   66369 retry.go:31] will retry after 763.715105ms: waiting for machine to come up
	I0404 22:55:18.883468   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884148   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884214   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.884004   66369 retry.go:31] will retry after 1.098461705s: waiting for machine to come up
	I0404 22:55:19.984522   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985080   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985111   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:19.985029   66369 retry.go:31] will retry after 1.20224728s: waiting for machine to come up
	I0404 22:55:21.188813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:21.189284   66369 retry.go:31] will retry after 1.828752424s: waiting for machine to come up
	I0404 22:55:18.798849   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.231226   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.231258   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.231274   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.260339   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.260380   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.298635   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.330231   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.330264   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.798461   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.808690   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:21.808726   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.299332   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.305181   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.305213   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.798717   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.818393   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.818438   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.298741   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.305958   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.305991   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.798813   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.804261   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.804296   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.298487   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.304891   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:24.304928   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.798523   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.804212   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:55:24.811974   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:55:24.811999   65047 api_server.go:131] duration metric: took 6.513685326s to wait for apiserver health ...
	I0404 22:55:24.812008   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:24.812014   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:24.814353   65047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:55:23.841622   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:25.841739   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:22.459272   65393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562873515s)
	I0404 22:55:22.459298   65393 crio.go:469] duration metric: took 3.56298059s to extract the tarball
	I0404 22:55:22.459305   65393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:22.506283   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:22.545033   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:22.545060   65393 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:55:22.545126   65393 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.545137   65393 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.545173   65393 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.545196   65393 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.545238   65393 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.545302   65393 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.545208   65393 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0404 22:55:22.545446   65393 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.546976   65393 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0404 22:55:22.547023   65393 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.547031   65393 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.546980   65393 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.547034   65393 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.547073   65393 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.547003   65393 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.547012   65393 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.771721   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0404 22:55:22.774797   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.775291   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.776362   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.785066   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.785465   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.787095   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.935936   65393 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0404 22:55:22.935986   65393 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0404 22:55:22.936032   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.978411   65393 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0404 22:55:22.978460   65393 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.978521   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.988007   65393 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0404 22:55:22.988053   65393 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.988095   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001263   65393 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0404 22:55:23.001296   65393 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0404 22:55:23.001325   65393 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.001356   65393 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.001373   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001400   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007053   65393 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0404 22:55:23.007107   65393 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.007111   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0404 22:55:23.007115   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:23.007135   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007071   65393 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0404 22:55:23.007139   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:23.007170   65393 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.007207   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.009469   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.010460   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.144982   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.145036   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0404 22:55:23.149847   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0404 22:55:23.149886   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0404 22:55:23.149931   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0404 22:55:23.386832   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.542789   65393 cache_images.go:92] duration metric: took 997.708569ms to LoadCachedImages
	W0404 22:55:23.542901   65393 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0404 22:55:23.542928   65393 kubeadm.go:928] updating node { 192.168.39.247 8443 v1.20.0 crio true true} ...
	I0404 22:55:23.543082   65393 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-343162 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:23.543199   65393 ssh_runner.go:195] Run: crio config
	I0404 22:55:23.604999   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:55:23.605023   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:23.605035   65393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:23.605057   65393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-343162 NodeName:old-k8s-version-343162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0404 22:55:23.605247   65393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-343162"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:23.605324   65393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0404 22:55:23.618943   65393 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:23.619021   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:23.631449   65393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0404 22:55:23.653570   65393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:23.674444   65393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0404 22:55:23.696627   65393 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:23.701248   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:23.715979   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:23.872589   65393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:23.893271   65393 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162 for IP: 192.168.39.247
	I0404 22:55:23.893302   65393 certs.go:194] generating shared ca certs ...
	I0404 22:55:23.893323   65393 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:23.893508   65393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:23.893573   65393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:23.893591   65393 certs.go:256] generating profile certs ...
	I0404 22:55:23.893703   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.key
	I0404 22:55:23.893789   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key.184368d7
	I0404 22:55:23.893848   65393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key
	I0404 22:55:23.894013   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:23.894060   65393 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:23.894075   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:23.894119   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:23.894152   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:23.894184   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:23.894283   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:23.895146   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:23.930844   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:23.970129   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:24.012084   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:24.061026   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0404 22:55:24.099215   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 22:55:24.144924   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:24.179027   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:24.214467   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:24.261693   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:24.294049   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:24.330228   65393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:24.357217   65393 ssh_runner.go:195] Run: openssl version
	I0404 22:55:24.364368   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:24.377630   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384429   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384493   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.392806   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:24.409360   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:24.425575   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431294   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431359   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.438465   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:24.452037   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:24.467913   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473901   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473964   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.482161   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:55:24.498553   65393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:24.505506   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:24.513143   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:24.520059   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:24.527197   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:24.534384   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:24.541499   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:55:24.548056   65393 kubeadm.go:391] StartCluster: {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:24.548157   65393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:24.548208   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.592650   65393 cri.go:89] found id: ""
	I0404 22:55:24.592732   65393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:24.605071   65393 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:24.605101   65393 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:24.605107   65393 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:24.605167   65393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:24.616615   65393 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:24.617680   65393 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-343162" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:24.618355   65393 kubeconfig.go:62] /home/jenkins/minikube-integration/16143-5297/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-343162" cluster setting kubeconfig missing "old-k8s-version-343162" context setting]
	I0404 22:55:24.619375   65393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:24.621522   65393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:24.632596   65393 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.247
	I0404 22:55:24.632645   65393 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:24.632660   65393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:24.632717   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.683167   65393 cri.go:89] found id: ""
	I0404 22:55:24.683241   65393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:24.703840   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:24.717504   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:24.717524   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:24.717579   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:24.729845   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:24.729916   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:24.741544   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:24.755004   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:24.755081   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:24.769007   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.782305   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:24.782371   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.792627   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:24.802737   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:24.802806   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:24.814766   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:24.825422   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.006245   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.377950   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.371665115s)
	I0404 22:55:26.377988   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:24.816240   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:24.831576   65047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:55:24.861844   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:24.873557   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:24.873598   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:24.873616   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:24.873627   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:24.873642   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:24.873650   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:24.873661   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:24.873669   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:24.873677   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:24.873690   65047 system_pods.go:74] duration metric: took 11.825989ms to wait for pod list to return data ...
	I0404 22:55:24.873703   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:24.879876   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:24.879907   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:24.879922   65047 node_conditions.go:105] duration metric: took 6.209498ms to run NodePressure ...
	I0404 22:55:24.879960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.214317   65047 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:25.220926   65047 kubeadm.go:733] kubelet initialised
	I0404 22:55:25.220998   65047 kubeadm.go:734] duration metric: took 6.651112ms waiting for restarted kubelet to initialise ...
	I0404 22:55:25.221013   65047 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:25.228414   65047 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.245164   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245223   65047 pod_ready.go:81] duration metric: took 16.78145ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.245238   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245271   65047 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.258160   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258192   65047 pod_ready.go:81] duration metric: took 12.905596ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.258205   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258215   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.265054   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265079   65047 pod_ready.go:81] duration metric: took 6.856081ms for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.265090   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265098   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.276639   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276664   65047 pod_ready.go:81] duration metric: took 11.554875ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.276675   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276684   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.665602   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665631   65047 pod_ready.go:81] duration metric: took 388.935811ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.665640   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665646   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.066199   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066229   65047 pod_ready.go:81] duration metric: took 400.576415ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.066242   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066252   65047 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.466681   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466715   65047 pod_ready.go:81] duration metric: took 400.447851ms for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.466728   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466738   65047 pod_ready.go:38] duration metric: took 1.245712492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:26.466760   65047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:55:26.482692   65047 ops.go:34] apiserver oom_adj: -16
	I0404 22:55:26.482712   65047 kubeadm.go:591] duration metric: took 11.918553713s to restartPrimaryControlPlane
	I0404 22:55:26.482721   65047 kubeadm.go:393] duration metric: took 11.977000438s to StartCluster
	I0404 22:55:26.482769   65047 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.482864   65047 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:26.484995   65047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.485328   65047 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:55:26.487534   65047 out.go:177] * Verifying Kubernetes components...
	I0404 22:55:26.485383   65047 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:55:26.485563   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:55:26.489030   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:26.489050   65047 addons.go:69] Setting default-storageclass=true in profile "no-preload-024416"
	I0404 22:55:26.489093   65047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-024416"
	I0404 22:55:26.489056   65047 addons.go:69] Setting metrics-server=true in profile "no-preload-024416"
	I0404 22:55:26.489149   65047 addons.go:234] Setting addon metrics-server=true in "no-preload-024416"
	W0404 22:55:26.489172   65047 addons.go:243] addon metrics-server should already be in state true
	I0404 22:55:26.489211   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489054   65047 addons.go:69] Setting storage-provisioner=true in profile "no-preload-024416"
	I0404 22:55:26.489277   65047 addons.go:234] Setting addon storage-provisioner=true in "no-preload-024416"
	W0404 22:55:26.489290   65047 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:55:26.489319   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489539   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489573   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489591   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489641   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489667   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489685   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.508693   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0404 22:55:26.509305   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.510142   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.510166   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.510664   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.510832   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.511052   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41779
	I0404 22:55:26.511503   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.512213   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.512232   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.512619   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.513233   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.513270   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.515412   65047 addons.go:234] Setting addon default-storageclass=true in "no-preload-024416"
	W0404 22:55:26.515459   65047 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:55:26.515498   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.515891   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.515954   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.534673   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I0404 22:55:26.535571   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.536148   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.536173   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.536663   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.536896   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.537603   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37341
	I0404 22:55:26.538883   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.541150   65047 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:55:26.539561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0404 22:55:26.539868   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.542772   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:55:26.542787   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:55:26.542805   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.544250   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.544270   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.544593   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.544676   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.545317   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.545356   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.546171   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.546188   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.546711   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.546722   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547227   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.547285   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.547464   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.547499   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.547905   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.548076   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.548259   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.564561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0404 22:55:26.565127   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.565730   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.565757   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.566293   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.566516   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.567998   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0404 22:55:26.568463   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.568520   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.569025   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.569047   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.569379   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.569543   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.571698   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.572551   65047 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.572566   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:55:26.572582   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.574538   65047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.019306   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019754   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019781   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:23.019722   66369 retry.go:31] will retry after 1.404874513s: waiting for machine to come up
	I0404 22:55:24.425830   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426412   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426442   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:24.426364   66369 retry.go:31] will retry after 2.757787773s: waiting for machine to come up
	I0404 22:55:26.576073   65047 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.576092   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:55:26.576106   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.580084   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.580585   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.581225   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.581277   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.581495   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.581699   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.581854   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.582320   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582345   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.582386   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582604   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.584316   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.584463   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.584614   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.731392   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:26.753167   65047 node_ready.go:35] waiting up to 6m0s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:26.829134   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.955286   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:55:26.955377   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:55:26.968469   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.984915   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:55:26.984948   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:55:27.028529   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.028558   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:55:27.092343   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.186244   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186319   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186642   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.186662   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.186677   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.186690   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186709   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186969   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.187017   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.187031   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.193602   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.193623   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.193903   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.193913   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.193920   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878278   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878305   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.878702   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.878746   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.878760   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878779   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878787   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.879104   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.879197   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.879228   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054442   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054471   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.054800   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.054858   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054874   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.054896   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054905   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.055233   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.055259   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.055270   65047 addons.go:470] Verifying addon metrics-server=true in "no-preload-024416"
	I0404 22:55:28.055236   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.057439   65047 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0404 22:55:28.058994   65047 addons.go:505] duration metric: took 1.573614168s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0404 22:55:27.842914   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:30.342668   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:26.696179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.809448   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.955521   65393 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:26.955614   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.456445   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.956055   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.455728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.956667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.455874   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.955832   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.455677   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.956327   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:31.456660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.188296   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188797   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188827   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:27.188759   66369 retry.go:31] will retry after 2.351381492s: waiting for machine to come up
	I0404 22:55:29.541200   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541601   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541628   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:29.541553   66369 retry.go:31] will retry after 4.132646705s: waiting for machine to come up
	I0404 22:55:28.757883   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:31.257480   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:32.257641   65047 node_ready.go:49] node "no-preload-024416" has status "Ready":"True"
	I0404 22:55:32.257665   65047 node_ready.go:38] duration metric: took 5.504446554s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:32.257673   65047 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:32.263652   65047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268791   65047 pod_ready.go:92] pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.268811   65047 pod_ready.go:81] duration metric: took 5.13427ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268820   65047 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273130   65047 pod_ready.go:92] pod "etcd-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.273147   65047 pod_ready.go:81] duration metric: took 4.32194ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273155   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.841480   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:35.342317   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:31.956195   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.456027   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.955974   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.456657   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.956113   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.456687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.955878   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.456630   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.956721   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:36.456247   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.678106   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.678633   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Found IP for machine: 192.168.72.148
	I0404 22:55:33.678669   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserving static IP address...
	I0404 22:55:33.678686   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has current primary IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.679110   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.679145   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserved static IP address: 192.168.72.148
	I0404 22:55:33.679165   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | skip adding static IP to network mk-default-k8s-diff-port-952083 - found existing host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"}
	I0404 22:55:33.679184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Getting to WaitForSSH function...
	I0404 22:55:33.679196   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for SSH to be available...
	I0404 22:55:33.681734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682113   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.682144   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682283   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH client type: external
	I0404 22:55:33.682325   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa (-rw-------)
	I0404 22:55:33.682356   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:33.682372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | About to run SSH command:
	I0404 22:55:33.682427   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | exit 0
	I0404 22:55:33.812360   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:33.812704   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetConfigRaw
	I0404 22:55:33.813408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:33.816515   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.816970   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.817052   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.817322   64791 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/config.json ...
	I0404 22:55:33.817590   64791 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:33.817615   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:33.817828   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.820061   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820388   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.820421   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820604   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.820762   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.820948   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.821063   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.821211   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.821433   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.821450   64791 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:33.928894   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:33.928927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929163   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:55:33.929186   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.931838   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932292   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.932323   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932506   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.932688   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932844   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.933158   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.933321   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.933335   64791 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-952083 && echo "default-k8s-diff-port-952083" | sudo tee /etc/hostname
	I0404 22:55:34.060158   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-952083
	
	I0404 22:55:34.060185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.063179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063552   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.063586   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063777   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.063975   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064172   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064314   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.064477   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.064628   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.064650   64791 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-952083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-952083/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-952083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:34.186212   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:55:34.186240   64791 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:34.186309   64791 buildroot.go:174] setting up certificates
	I0404 22:55:34.186332   64791 provision.go:84] configureAuth start
	I0404 22:55:34.186351   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:34.186637   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:34.189184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189504   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.189544   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189635   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.191813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192315   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.192341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192507   64791 provision.go:143] copyHostCerts
	I0404 22:55:34.192560   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:34.192569   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:34.192622   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:34.192717   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:34.192726   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:34.192749   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:34.192812   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:34.192820   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:34.192838   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:34.192901   64791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-952083 san=[127.0.0.1 192.168.72.148 default-k8s-diff-port-952083 localhost minikube]
	I0404 22:55:34.326920   64791 provision.go:177] copyRemoteCerts
	I0404 22:55:34.326975   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:34.326997   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.329376   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329725   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.329752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.330118   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.330260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.330401   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.416395   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:34.443648   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0404 22:55:34.470803   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:55:34.497420   64791 provision.go:87] duration metric: took 311.071464ms to configureAuth
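The configureAuth step above refreshes the host-side ca/cert/key PEMs and then issues a server certificate for the machine with the SANs listed in the log (127.0.0.1, 192.168.72.148, the profile name, localhost, minikube) before scp'ing it to /etc/docker on the guest. Below is a minimal Go sketch of issuing a SAN-bearing server certificate with crypto/x509; it self-signs to stay short, whereas minikube signs with its ca.pem/ca-key.pem pair, and only the SAN list and org string are taken from the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs copied from the provision.go:117 line above; in minikube the cert is
	// signed by ca.pem/ca-key.pem, here it is self-signed to keep the sketch short.
	sans := []string{"127.0.0.1", "192.168.72.148", "default-k8s-diff-port-952083", "localhost", "minikube"}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-952083"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Split the SANs into IP and DNS entries, mirroring the san=[...] list.
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}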
	I0404 22:55:34.497454   64791 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:34.497663   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:55:34.497759   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.500799   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501149   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.501180   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501387   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.501595   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501915   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.502071   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.502271   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.502303   64791 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:34.775054   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:34.775085   64791 machine.go:97] duration metric: took 957.478657ms to provisionDockerMachine
	I0404 22:55:34.775099   64791 start.go:293] postStartSetup for "default-k8s-diff-port-952083" (driver="kvm2")
	I0404 22:55:34.775112   64791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:34.775131   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:34.775488   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:34.775534   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.778577   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779005   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.779036   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779204   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.779393   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.779524   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.779658   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.864675   64791 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:34.869694   64791 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:34.869730   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:34.869826   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:34.869920   64791 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:34.870061   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:34.880961   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:34.907037   64791 start.go:296] duration metric: took 131.924012ms for postStartSetup
	I0404 22:55:34.907080   64791 fix.go:56] duration metric: took 19.833301571s for fixHost
	I0404 22:55:34.907108   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.909893   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910291   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.910316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910473   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.910679   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.910880   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.911020   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.911190   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.911412   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.911428   64791 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:55:35.022167   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271334.996001157
	
	I0404 22:55:35.022190   64791 fix.go:216] guest clock: 1712271334.996001157
	I0404 22:55:35.022200   64791 fix.go:229] Guest: 2024-04-04 22:55:34.996001157 +0000 UTC Remote: 2024-04-04 22:55:34.907085076 +0000 UTC m=+358.286689706 (delta=88.916081ms)
	I0404 22:55:35.022224   64791 fix.go:200] guest clock delta is within tolerance: 88.916081ms
	I0404 22:55:35.022231   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 19.948498144s
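The fix.go lines above read the guest clock over SSH with "date +%s.%N" and compare it against the host-side timestamp, resyncing only when the delta exceeds a tolerance (88.9ms here, which passes). A small Go sketch of that comparison, assuming the guest output is well-formed with nine nanosecond digits and using a one-second tolerance purely for illustration; minikube's actual threshold is not shown in this log.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Raw output of `date +%s.%N` exactly as it appears in the log above.
	guestRaw := "1712271334.996001157"

	parts := strings.SplitN(guestRaw, ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		panic(err)
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64) // %N is zero-padded to 9 digits
	if err != nil {
		panic(err)
	}
	guest := time.Unix(sec, nsec)

	host := time.Now() // stands in for the host-side "Remote" timestamp in the log
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	// Assumed tolerance, for illustration only.
	const tolerance = 1 * time.Second
	if delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}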
	I0404 22:55:35.022255   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.022485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:35.025707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026147   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.026179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026378   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.026876   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027047   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027120   64791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:35.027159   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.027276   64791 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:35.027318   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.030408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030592   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030785   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030819   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030910   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030929   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.030943   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.031146   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.031317   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031486   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.031653   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.105523   64791 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:35.145225   64791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:35.294577   64791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:35.301227   64791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:35.301318   64791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:35.318712   64791 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:35.318739   64791 start.go:494] detecting cgroup driver to use...
	I0404 22:55:35.318797   64791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:35.335684   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:35.351068   64791 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:35.351139   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:35.366084   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:35.381259   64791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:35.525652   64791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:35.702060   64791 docker.go:233] disabling docker service ...
	I0404 22:55:35.702137   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:35.720037   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:35.735605   64791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:35.868176   64791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:36.011977   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:36.030320   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:36.051953   64791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:55:36.052033   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.063475   64791 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:36.063539   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.075238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.086866   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.099524   64791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:36.112514   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.126407   64791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.146238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
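The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, put conmon into the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 under default_sysctls. A sketch of the first two substitutions done in-process with Go's regexp instead of sed; only the file path and replacement values come from the log, it must run as root inside the guest, and crio still needs a restart afterwards.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(confPath)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	// Same effect as the sed one-liners in the log: pin the pause image and
	// switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(confPath, []byte(conf), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", confPath, "- restart crio to pick up the change")
}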
	I0404 22:55:36.158896   64791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:36.170410   64791 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:36.170482   64791 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:36.185608   64791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:36.196706   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:36.327075   64791 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:55:36.470054   64791 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:36.470125   64791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:36.475664   64791 start.go:562] Will wait 60s for crictl version
	I0404 22:55:36.475727   64791 ssh_runner.go:195] Run: which crictl
	I0404 22:55:36.480165   64791 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:36.519941   64791 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:36.520021   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.549053   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.579491   64791 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:55:36.581026   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:36.583730   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584150   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:36.584184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584371   64791 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:36.588964   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
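The two lines above check for the 192.168.72.1 -> host.minikube.internal mapping and, when it is missing, rewrite /etc/hosts by filtering out any stale line for that name and appending a fresh one through a temp file. The same filter-and-append idiom in Go, with a hypothetical helper name; minikube runs the shell version remotely over SSH, and this sketch needs root to write /etc/hosts.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry drops any existing line ending in "<TAB>host" and appends
// "ip<TAB>host", mirroring the { grep -v ...; echo ...; } > /tmp/h.$$; cp idiom
// from the log (hypothetical helper, not minikube's actual function).
func pinHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}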
	I0404 22:55:36.602269   64791 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:36.602374   64791 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:55:36.602416   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:36.641719   64791 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:55:36.641787   64791 ssh_runner.go:195] Run: which lz4
	I0404 22:55:36.646084   64791 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:55:36.650605   64791 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:36.650644   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:55:34.279364   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.288411   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:37.280858   65047 pod_ready.go:92] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.280895   65047 pod_ready.go:81] duration metric: took 5.007733221s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.280913   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286888   65047 pod_ready.go:92] pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.286912   65047 pod_ready.go:81] duration metric: took 5.987359ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286922   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292489   65047 pod_ready.go:92] pod "kube-proxy-zmx89" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.292518   65047 pod_ready.go:81] duration metric: took 5.588199ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292530   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298455   65047 pod_ready.go:92] pod "kube-scheduler-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.298478   65047 pod_ready.go:81] duration metric: took 5.939579ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298493   65047 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
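The pod_ready lines above poll each kube-system pod until its Ready condition turns True and record the elapsed time (about 5.0s for kube-apiserver-no-preload-024416). A sketch of an equivalent wait using client-go; the kubeconfig path, poll interval, and 6-minute timeout are assumptions, only the namespace and pod name are taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "kube-system", "kube-apiserver-no-preload-024416"
	start := time.Now()
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("pod %q Ready after %s\n", name, time.Since(start))
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			panic(fmt.Sprintf("pod %q never became Ready: %v", name, ctx.Err()))
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}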
	I0404 22:55:37.342788   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:39.841832   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.956068   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.456343   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.955665   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.456277   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.956681   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.455983   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.956000   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.456576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.956669   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.455855   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.281827   64791 crio.go:462] duration metric: took 1.635781318s to copy over tarball
	I0404 22:55:38.281962   64791 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:40.664660   64791 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.382669035s)
	I0404 22:55:40.664694   64791 crio.go:469] duration metric: took 2.382835483s to extract the tarball
	I0404 22:55:40.664704   64791 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:40.706370   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:40.753953   64791 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:55:40.753983   64791 cache_images.go:84] Images are preloaded, skipping loading
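Above, the ~403 MB preload tarball is scp'd to /preloaded.tar.lz4, unpacked into /var with tar and lz4 (2.38s), removed, and crictl images is re-run to confirm the runtime now sees all preloaded images. A short Go sketch of that extract-and-verify sequence via os/exec, with the tar flags copied from the log; it assumes lz4 and crictl exist inside the guest and that sudo is available.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()

	// Flags copied from the ssh_runner line above.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))

	// Then ask the runtime for its image list, matching the
	// `sudo crictl images --output json` calls in the log.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl failed:", err)
		os.Exit(1)
	}
	fmt.Printf("crictl returned %d bytes of image metadata\n", len(out))
}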
	I0404 22:55:40.753992   64791 kubeadm.go:928] updating node { 192.168.72.148 8444 v1.29.3 crio true true} ...
	I0404 22:55:40.754096   64791 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-952083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
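kubeadm.go:940 above shows the systemd drop-in minikube generates for the kubelet: Wants=crio.service plus a cleared-and-reset ExecStart carrying --hostname-override and --node-ip, which is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A sketch that renders such a drop-in with text/template; the struct and template text are illustrative, only the flag values are taken from the log.

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds just the fields needed for the drop-in shown in the log
// (illustrative struct, not minikube's own type).
type kubeletOpts struct {
	KubeletPath string
	NodeName    string
	NodeIP      string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	opts := kubeletOpts{
		KubeletPath: "/var/lib/minikube/binaries/v1.29.3/kubelet",
		NodeName:    "default-k8s-diff-port-952083",
		NodeIP:      "192.168.72.148",
	}
	tmpl := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}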
	I0404 22:55:40.754157   64791 ssh_runner.go:195] Run: crio config
	I0404 22:55:40.809287   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:40.809309   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:40.809318   64791 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:40.809338   64791 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.148 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-952083 NodeName:default-k8s-diff-port-952083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:40.809467   64791 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.148
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-952083"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:40.809531   64791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:55:40.821089   64791 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:40.821151   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:40.832576   64791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0404 22:55:40.853706   64791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:40.874277   64791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0404 22:55:40.895096   64791 ssh_runner.go:195] Run: grep 192.168.72.148	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:40.899433   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:40.913456   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:41.041078   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:41.062340   64791 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083 for IP: 192.168.72.148
	I0404 22:55:41.062382   64791 certs.go:194] generating shared ca certs ...
	I0404 22:55:41.062402   64791 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:41.062583   64791 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:41.062640   64791 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:41.062662   64791 certs.go:256] generating profile certs ...
	I0404 22:55:41.062776   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/client.key
	I0404 22:55:41.062859   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key.c46373d6
	I0404 22:55:41.062921   64791 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key
	I0404 22:55:41.063037   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:41.063065   64791 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:41.063075   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:41.063099   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:41.063140   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:41.063166   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:41.063200   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:41.063842   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:41.113790   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:41.142967   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:41.174154   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:41.209434   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0404 22:55:41.244064   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:41.272716   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:41.297871   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:41.325547   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:41.352050   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:41.377876   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:41.404387   64791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:41.423187   64791 ssh_runner.go:195] Run: openssl version
	I0404 22:55:41.429873   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:41.442164   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447557   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447623   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.454131   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:41.467223   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:41.480671   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485831   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485898   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.492136   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:41.505696   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:41.519176   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524158   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524221   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.531176   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
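The openssl x509 -hash calls above compute each certificate's subject hash, and the following ln -fs creates /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the others), the c_rehash layout OpenSSL uses to locate trust anchors. A sketch of the same hash-and-symlink step from Go, with a hypothetical helper name; it requires root and the openssl binary.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks /etc/ssl/certs/<subject-hash>.0 to certPath, the
// layout OpenSSL's trust-store lookup expects (hypothetical helper).
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}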
	I0404 22:55:41.543412   64791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:41.548643   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:41.555139   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:41.562284   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:41.569434   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:41.576462   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:41.583084   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
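Each "-checkend 86400" run above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration before the cluster restart. An equivalent check with crypto/x509; the helper name is illustrative and the path list just mirrors a few of the certs checked in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>` from the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// A few of the certs checked in the log; 86400s == 24h.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, p, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}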
	I0404 22:55:41.589945   64791 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:41.590031   64791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:41.590093   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:41.632266   64791 cri.go:89] found id: ""
	I0404 22:55:41.632350   64791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:41.644142   64791 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:41.644164   64791 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:41.644170   64791 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:41.644215   64791 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:41.656179   64791 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:41.657324   64791 kubeconfig.go:125] found "default-k8s-diff-port-952083" server: "https://192.168.72.148:8444"
	I0404 22:55:41.659605   64791 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:39.306769   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.307336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:42.038454   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:44.341305   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.341781   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.956291   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.455751   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.955701   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.456455   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.956511   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.456335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.956604   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.456239   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.955763   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:46.456691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.672105   64791 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.148
	I0404 22:55:42.028665   64791 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:42.028686   64791 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:42.028762   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:42.091777   64791 cri.go:89] found id: ""
	I0404 22:55:42.091854   64791 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:42.113539   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:42.124875   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:42.124901   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:42.124954   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 22:55:42.135500   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:42.135570   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:42.146253   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 22:55:42.156700   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:42.156761   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:42.168384   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.179440   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:42.179516   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.191004   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 22:55:42.201446   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:42.201506   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:42.212001   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:42.223300   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:42.338171   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.549365   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.211152725s)
	I0404 22:55:43.549401   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.801115   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.882593   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.959297   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:43.959380   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.459491   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.960236   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.988764   64791 api_server.go:72] duration metric: took 1.029467706s to wait for apiserver process to appear ...
	I0404 22:55:44.988792   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:44.988813   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:43.615360   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:45.804976   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:47.806675   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:48.357574   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.357611   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.357628   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.395772   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.395808   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.488922   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.505422   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.505481   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:48.988969   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.994000   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.994032   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.489335   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.500302   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:49.500347   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.989893   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.994450   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 22:55:50.001679   64791 api_server.go:141] control plane version: v1.29.3
	I0404 22:55:50.001715   64791 api_server.go:131] duration metric: took 5.012915028s to wait for apiserver health ...
	I0404 22:55:50.001726   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:50.001737   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:50.004063   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:55:48.840760   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:50.842564   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.956402   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.456719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.956495   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.456256   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.955795   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.455864   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.956545   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.456336   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.956454   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:51.455845   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.005634   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:50.017205   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:55:50.041244   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:50.050985   64791 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:50.051033   64791 system_pods.go:61] "coredns-76f75df574-psx97" [9e220912-4d45-45d7-85cb-65396d969b27] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:50.051044   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [980bda86-5d40-4762-ae29-9a1d01bcd17f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:50.051055   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [0d3ec134-dc70-445a-a2f8-1c00f6711b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:50.051064   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [d82ba649-a2ae-479e-8aa1-e734bc69f702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:50.051074   64791 system_pods.go:61] "kube-proxy-ssg9w" [23098716-a4cd-4164-a604-fc27b807050f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:50.051081   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [58c1457c-816c-4f10-ac46-be14b4e20592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:50.051089   64791 system_pods.go:61] "metrics-server-57f55c9bc5-zbl54" [f754b7dd-4233-4faa-b8d0-0d42d4c432a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:50.051105   64791 system_pods.go:61] "storage-provisioner" [3ea130dd-2282-4094-b3ca-990faf89b79b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:50.051116   64791 system_pods.go:74] duration metric: took 9.852724ms to wait for pod list to return data ...
	I0404 22:55:50.051136   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:50.056106   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:50.056149   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:50.056161   64791 node_conditions.go:105] duration metric: took 5.019174ms to run NodePressure ...
	I0404 22:55:50.056180   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:50.366600   64791 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371116   64791 kubeadm.go:733] kubelet initialised
	I0404 22:55:50.371136   64791 kubeadm.go:734] duration metric: took 4.514303ms waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371152   64791 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:50.376537   64791 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.382379   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382406   64791 pod_ready.go:81] duration metric: took 5.841107ms for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.382415   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382421   64791 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.387835   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387862   64791 pod_ready.go:81] duration metric: took 5.433039ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.387874   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387883   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.395448   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395477   64791 pod_ready.go:81] duration metric: took 7.587276ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.395488   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395494   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.444766   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444796   64791 pod_ready.go:81] duration metric: took 49.295101ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.444807   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444813   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845479   64791 pod_ready.go:92] pod "kube-proxy-ssg9w" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:50.845502   64791 pod_ready.go:81] duration metric: took 400.682428ms for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845511   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.305975   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.307023   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.842745   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:55.341222   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:51.955847   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.456474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.956717   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.456485   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.956668   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.455709   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.956540   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.455959   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.955819   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:56.456025   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.852107   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.856385   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.806097   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.806696   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.841694   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:00.341504   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.955746   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.456752   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.956458   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.455791   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.956520   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.456037   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.956424   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.456347   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.955807   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.456367   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.365969   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.853738   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:57.853764   64791 pod_ready.go:81] duration metric: took 7.008247096s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:57.853774   64791 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:59.862975   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:59.305144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.805592   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:02.341599   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.842260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.956676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.455725   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.956240   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.456026   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.955796   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.455684   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.955830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.455658   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.956488   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:06.456780   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.863099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.361040   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.362845   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:03.805883   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.305272   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:08.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:07.341339   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:09.341800   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.342526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.956270   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.456242   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.956623   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.455980   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.955699   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.456558   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.955713   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.455854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.955758   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:11.456543   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.363849   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.861759   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.805375   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:12.808721   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:13.841173   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.842526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.956223   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.456571   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.956246   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.456188   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.956319   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.456519   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.955719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.455682   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.956653   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:16.456447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.361956   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.362846   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.306163   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.806194   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.845799   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.342615   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:16.955750   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.456215   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.956505   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.456183   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.955744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.456227   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.955977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.455808   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.956276   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.456049   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.861226   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:19.861343   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.307101   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.805150   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.839779   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.841442   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:21.956160   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.456145   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.956335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.456579   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.956691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.456562   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.956005   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.456432   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.956467   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:26.456167   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.862937   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.360818   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.809913   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.305921   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.342163   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.347258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:26.956592   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:26.956691   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:27.005811   65393 cri.go:89] found id: ""
	I0404 22:56:27.005835   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.005842   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:27.005847   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:27.005917   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:27.042188   65393 cri.go:89] found id: ""
	I0404 22:56:27.042211   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.042235   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:27.042241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:27.042286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:27.079865   65393 cri.go:89] found id: ""
	I0404 22:56:27.079892   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.079900   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:27.079906   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:27.079958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:27.116998   65393 cri.go:89] found id: ""
	I0404 22:56:27.117024   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.117031   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:27.117037   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:27.117086   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:27.156188   65393 cri.go:89] found id: ""
	I0404 22:56:27.156214   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.156221   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:27.156236   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:27.156314   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:27.191588   65393 cri.go:89] found id: ""
	I0404 22:56:27.191620   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.191633   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:27.191640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:27.191687   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:27.231075   65393 cri.go:89] found id: ""
	I0404 22:56:27.231107   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.231117   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:27.231124   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:27.231190   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:27.270480   65393 cri.go:89] found id: ""
	I0404 22:56:27.270513   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.270525   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:27.270537   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:27.270561   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:27.324332   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:27.324373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:27.339847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:27.339874   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:27.468846   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:27.468868   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:27.468885   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:27.533979   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:27.534016   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.080447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:30.094371   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:30.094442   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:30.132187   65393 cri.go:89] found id: ""
	I0404 22:56:30.132211   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.132219   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:30.132225   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:30.132271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:30.173402   65393 cri.go:89] found id: ""
	I0404 22:56:30.173427   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.173437   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:30.173445   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:30.173509   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:30.212702   65393 cri.go:89] found id: ""
	I0404 22:56:30.212759   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.212773   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:30.212784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:30.212857   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:30.267124   65393 cri.go:89] found id: ""
	I0404 22:56:30.267153   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.267164   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:30.267171   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:30.267240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:30.314850   65393 cri.go:89] found id: ""
	I0404 22:56:30.314877   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.314887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:30.314895   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:30.314951   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:30.353961   65393 cri.go:89] found id: ""
	I0404 22:56:30.353985   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.353996   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:30.354003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:30.354065   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:30.393287   65393 cri.go:89] found id: ""
	I0404 22:56:30.393321   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.393333   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:30.393340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:30.393402   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:30.432268   65393 cri.go:89] found id: ""
	I0404 22:56:30.432304   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.432315   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:30.432333   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:30.432349   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:30.498906   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:30.498941   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.544676   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:30.544711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:30.595528   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:30.595562   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:30.610773   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:30.610811   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:30.688433   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:26.862939   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.360914   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.360952   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.806657   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:32.304653   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.841404   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.341994   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:33.188634   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:33.203199   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:33.203262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:33.243222   65393 cri.go:89] found id: ""
	I0404 22:56:33.243250   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.243257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:33.243262   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:33.243330   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:33.284525   65393 cri.go:89] found id: ""
	I0404 22:56:33.284550   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.284560   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:33.284567   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:33.284621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:33.324219   65393 cri.go:89] found id: ""
	I0404 22:56:33.324249   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.324266   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:33.324273   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:33.324328   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:33.362229   65393 cri.go:89] found id: ""
	I0404 22:56:33.362254   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.362265   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:33.362272   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:33.362333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.404616   65393 cri.go:89] found id: ""
	I0404 22:56:33.404651   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.404663   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:33.404671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:33.404741   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:33.445123   65393 cri.go:89] found id: ""
	I0404 22:56:33.445150   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.445160   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:33.445168   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:33.445227   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:33.485996   65393 cri.go:89] found id: ""
	I0404 22:56:33.486025   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.486033   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:33.486041   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:33.486108   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:33.524275   65393 cri.go:89] found id: ""
	I0404 22:56:33.524299   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.524307   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:33.524315   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:33.524326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:33.577095   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:33.577133   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:33.592799   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:33.592830   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:33.672784   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:33.672804   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:33.672815   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:33.748049   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:33.748091   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:36.293335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:36.308303   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:36.308395   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:36.346627   65393 cri.go:89] found id: ""
	I0404 22:56:36.346648   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.346656   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:36.346661   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:36.346704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:36.386923   65393 cri.go:89] found id: ""
	I0404 22:56:36.386950   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.386958   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:36.386963   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:36.387011   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:36.427795   65393 cri.go:89] found id: ""
	I0404 22:56:36.427824   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.427832   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:36.427838   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:36.427898   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:36.464754   65393 cri.go:89] found id: ""
	I0404 22:56:36.464780   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.464790   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:36.464797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:36.464864   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.361764   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:35.861327   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.807482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:37.305470   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.842142   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:38.843672   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.341676   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.499502   65393 cri.go:89] found id: ""
	I0404 22:56:36.499529   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.499540   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:36.499549   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:36.499610   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:36.536686   65393 cri.go:89] found id: ""
	I0404 22:56:36.536716   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.536726   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:36.536734   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:36.536782   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:36.572573   65393 cri.go:89] found id: ""
	I0404 22:56:36.572602   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.572614   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:36.572623   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:36.572683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:36.607444   65393 cri.go:89] found id: ""
	I0404 22:56:36.607474   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.607483   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:36.607491   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:36.607502   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:36.662990   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:36.663040   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:36.677996   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:36.678019   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:36.751850   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:36.751871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:36.751886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:36.831451   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:36.831484   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:39.393170   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:39.407581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:39.407640   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:39.441850   65393 cri.go:89] found id: ""
	I0404 22:56:39.441879   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.441889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:39.441896   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:39.441981   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:39.476890   65393 cri.go:89] found id: ""
	I0404 22:56:39.476919   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.476931   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:39.476938   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:39.477001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:39.511497   65393 cri.go:89] found id: ""
	I0404 22:56:39.511523   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.511534   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:39.511540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:39.511605   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:39.549485   65393 cri.go:89] found id: ""
	I0404 22:56:39.549514   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.549526   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:39.549534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:39.549594   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:39.589203   65393 cri.go:89] found id: ""
	I0404 22:56:39.589234   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.589243   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:39.589249   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:39.589311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:39.626903   65393 cri.go:89] found id: ""
	I0404 22:56:39.626926   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.626939   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:39.626946   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:39.627008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:39.660975   65393 cri.go:89] found id: ""
	I0404 22:56:39.660999   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.661007   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:39.661016   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:39.661067   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:39.697162   65393 cri.go:89] found id: ""
	I0404 22:56:39.697187   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.697195   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:39.697203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:39.697217   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:39.755642   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:39.755683   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:39.773016   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:39.773052   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:39.849466   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:39.849491   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:39.849507   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:39.940680   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:39.940716   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:38.360638   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:40.361088   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:39.305528   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.307136   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.844830   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:46.342481   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:42.486573   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:42.500548   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:42.500612   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:42.539890   65393 cri.go:89] found id: ""
	I0404 22:56:42.539926   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.539954   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:42.539964   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:42.540030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:42.576664   65393 cri.go:89] found id: ""
	I0404 22:56:42.576699   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.576709   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:42.576715   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:42.576816   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:42.615456   65393 cri.go:89] found id: ""
	I0404 22:56:42.615489   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.615500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:42.615507   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:42.615569   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:42.667886   65393 cri.go:89] found id: ""
	I0404 22:56:42.667917   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.667928   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:42.667935   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:42.668001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:42.726120   65393 cri.go:89] found id: ""
	I0404 22:56:42.726150   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.726162   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:42.726169   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:42.726233   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:42.781280   65393 cri.go:89] found id: ""
	I0404 22:56:42.781305   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.781316   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:42.781322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:42.781386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:42.818419   65393 cri.go:89] found id: ""
	I0404 22:56:42.818449   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.818459   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:42.818466   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:42.818531   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:42.867866   65393 cri.go:89] found id: ""
	I0404 22:56:42.867902   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.867911   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:42.867920   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:42.867935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:42.953141   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:42.953186   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:42.994936   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:42.994968   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:43.047257   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:43.047288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:43.062391   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:43.062426   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:43.138948   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:45.639469   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:45.654584   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:45.654662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:45.693244   65393 cri.go:89] found id: ""
	I0404 22:56:45.693267   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.693276   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:45.693281   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:45.693335   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:45.732468   65393 cri.go:89] found id: ""
	I0404 22:56:45.732501   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.732513   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:45.732520   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:45.732587   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:45.769504   65393 cri.go:89] found id: ""
	I0404 22:56:45.769538   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.769552   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:45.769560   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:45.769625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:45.815840   65393 cri.go:89] found id: ""
	I0404 22:56:45.815864   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.815872   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:45.815877   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:45.815987   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:45.856479   65393 cri.go:89] found id: ""
	I0404 22:56:45.856511   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.856522   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:45.856530   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:45.856596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:45.896703   65393 cri.go:89] found id: ""
	I0404 22:56:45.896726   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.896734   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:45.896739   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:45.896797   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:45.938227   65393 cri.go:89] found id: ""
	I0404 22:56:45.938253   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.938261   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:45.938266   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:45.938349   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:45.975762   65393 cri.go:89] found id: ""
	I0404 22:56:45.975790   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.975801   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:45.975811   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:45.975823   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:46.056563   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:46.056599   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.099210   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:46.099242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:46.153524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:46.153560   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:46.169268   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:46.169297   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:46.244893   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:42.361586   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:44.860609   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.806644   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:45.807773   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:47.807842   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.841498   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.341274   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.745334   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:48.760547   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:48.760625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:48.799676   65393 cri.go:89] found id: ""
	I0404 22:56:48.799699   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.799706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:48.799721   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:48.799780   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:48.843434   65393 cri.go:89] found id: ""
	I0404 22:56:48.843467   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.843476   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:48.843481   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:48.843544   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:48.895391   65393 cri.go:89] found id: ""
	I0404 22:56:48.895421   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.895440   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:48.895448   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:48.895513   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:48.936222   65393 cri.go:89] found id: ""
	I0404 22:56:48.936252   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.936263   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:48.936271   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:48.936334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:48.974533   65393 cri.go:89] found id: ""
	I0404 22:56:48.974563   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.974570   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:48.974575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:48.974629   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:49.015378   65393 cri.go:89] found id: ""
	I0404 22:56:49.015406   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.015424   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:49.015440   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:49.015501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:49.056623   65393 cri.go:89] found id: ""
	I0404 22:56:49.056647   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.056658   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:49.056664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:49.056725   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:49.102414   65393 cri.go:89] found id: ""
	I0404 22:56:49.102442   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.102453   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:49.102464   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:49.102476   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:49.158193   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:49.158240   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:49.173121   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:49.173147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:49.248973   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:49.249000   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:49.249017   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:49.341732   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:49.341778   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.861623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.863321   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.362926   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:50.305198   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:52.305614   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:53.348777   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:55.848753   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.888403   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:51.903442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:51.903503   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:51.941672   65393 cri.go:89] found id: ""
	I0404 22:56:51.941698   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.941706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:51.941712   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:51.941766   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:51.981373   65393 cri.go:89] found id: ""
	I0404 22:56:51.981396   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.981408   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:51.981413   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:51.981460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:52.019539   65393 cri.go:89] found id: ""
	I0404 22:56:52.019567   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.019575   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:52.019581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:52.019645   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:52.059015   65393 cri.go:89] found id: ""
	I0404 22:56:52.059050   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.059060   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:52.059073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:52.059134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:52.099879   65393 cri.go:89] found id: ""
	I0404 22:56:52.099904   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.099911   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:52.099917   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:52.099975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:52.140665   65393 cri.go:89] found id: ""
	I0404 22:56:52.140739   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.140752   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:52.140761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:52.140833   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:52.182063   65393 cri.go:89] found id: ""
	I0404 22:56:52.182091   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.182100   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:52.182106   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:52.182161   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:52.218610   65393 cri.go:89] found id: ""
	I0404 22:56:52.218636   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.218644   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:52.218651   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:52.218666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:52.232987   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:52.233014   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:52.307317   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:52.307343   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:52.307358   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:52.390484   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:52.390522   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:52.437568   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:52.437601   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:54.995830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:55.010758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:55.010818   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:55.057312   65393 cri.go:89] found id: ""
	I0404 22:56:55.057342   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.057354   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:55.057362   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:55.057440   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:55.097774   65393 cri.go:89] found id: ""
	I0404 22:56:55.097803   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.097815   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:55.097822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:55.097881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:55.134169   65393 cri.go:89] found id: ""
	I0404 22:56:55.134198   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.134207   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:55.134215   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:55.134268   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:55.174151   65393 cri.go:89] found id: ""
	I0404 22:56:55.174191   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.174203   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:55.174210   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:55.174297   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:55.213618   65393 cri.go:89] found id: ""
	I0404 22:56:55.213646   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.213655   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:55.213661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:55.213722   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:55.252406   65393 cri.go:89] found id: ""
	I0404 22:56:55.252435   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.252446   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:55.252455   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:55.252536   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:55.289595   65393 cri.go:89] found id: ""
	I0404 22:56:55.289623   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.289633   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:55.289640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:55.289702   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:55.328579   65393 cri.go:89] found id: ""
	I0404 22:56:55.328611   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.328621   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:55.328632   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:55.328647   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:55.384434   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:55.384470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:55.399311   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:55.399336   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:55.478341   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:55.478370   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:55.478404   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:55.560209   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:55.560242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:53.861243   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.361466   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:54.306859   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.804903   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.341302   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.342865   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.104854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:58.119735   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:58.119813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:58.158145   65393 cri.go:89] found id: ""
	I0404 22:56:58.158169   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.158182   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:58.158190   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:58.158255   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:58.198676   65393 cri.go:89] found id: ""
	I0404 22:56:58.198703   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.198714   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:58.198721   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:58.198779   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:58.236393   65393 cri.go:89] found id: ""
	I0404 22:56:58.236419   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.236430   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:58.236436   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:58.236505   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:58.274688   65393 cri.go:89] found id: ""
	I0404 22:56:58.274714   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.274724   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:58.274731   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:58.274796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:58.311908   65393 cri.go:89] found id: ""
	I0404 22:56:58.311935   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.311947   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:58.311956   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:58.312016   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:58.353477   65393 cri.go:89] found id: ""
	I0404 22:56:58.353500   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.353513   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:58.353518   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:58.353577   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.390981   65393 cri.go:89] found id: ""
	I0404 22:56:58.391004   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.391012   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:58.391017   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:58.391084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:58.433125   65393 cri.go:89] found id: ""
	I0404 22:56:58.433147   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.433160   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:58.433168   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:58.433181   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:58.488214   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:58.488251   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:58.503274   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:58.503315   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:58.583124   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:58.583149   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:58.583167   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:58.662856   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:58.662889   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:01.204061   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:01.219133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:01.219213   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:01.256615   65393 cri.go:89] found id: ""
	I0404 22:57:01.256641   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.256650   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:01.256655   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:01.256704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:01.294765   65393 cri.go:89] found id: ""
	I0404 22:57:01.294796   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.294805   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:01.294813   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:01.294874   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:01.334936   65393 cri.go:89] found id: ""
	I0404 22:57:01.334966   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.334976   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:01.334983   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:01.335030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:01.382389   65393 cri.go:89] found id: ""
	I0404 22:57:01.382429   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.382438   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:01.382443   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:01.382491   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:01.421962   65393 cri.go:89] found id: ""
	I0404 22:57:01.422001   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.422013   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:01.422020   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:01.422084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:01.460394   65393 cri.go:89] found id: ""
	I0404 22:57:01.460424   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.460432   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:01.460437   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:01.460484   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.365468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.862158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.807541   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.305535   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:03.305610   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:02.841121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:04.841717   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.497525   65393 cri.go:89] found id: ""
	I0404 22:57:01.497552   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.497561   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:01.497566   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:01.497623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:01.535396   65393 cri.go:89] found id: ""
	I0404 22:57:01.535431   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.535442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:01.535454   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:01.535469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:01.588397   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:01.588437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:01.603396   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:01.603427   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:01.686312   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:01.686332   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:01.686344   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:01.771352   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:01.771393   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.316206   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:04.330612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:04.330677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:04.377998   65393 cri.go:89] found id: ""
	I0404 22:57:04.378023   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.378031   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:04.378036   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:04.378098   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:04.421775   65393 cri.go:89] found id: ""
	I0404 22:57:04.421811   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.421822   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:04.421830   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:04.421894   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:04.463234   65393 cri.go:89] found id: ""
	I0404 22:57:04.463265   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.463276   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:04.463284   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:04.463347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:04.510879   65393 cri.go:89] found id: ""
	I0404 22:57:04.510907   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.510916   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:04.510925   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:04.511012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:04.552305   65393 cri.go:89] found id: ""
	I0404 22:57:04.552336   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.552348   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:04.552356   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:04.552413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:04.592631   65393 cri.go:89] found id: ""
	I0404 22:57:04.592661   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.592672   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:04.592679   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:04.592739   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:04.631854   65393 cri.go:89] found id: ""
	I0404 22:57:04.631883   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.631893   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:04.631900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:04.631966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:04.670530   65393 cri.go:89] found id: ""
	I0404 22:57:04.670554   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.670563   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:04.670570   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:04.670582   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:04.685546   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:04.685575   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:04.770377   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:04.770400   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:04.770420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:04.853637   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:04.853674   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.899690   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:04.899719   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
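A minimal standalone sketch of the check the cycle above keeps repeating: for each control-plane component the tooling runs the same crictl query and finds zero containers. The sketch below assumes crictl is installed on the node and sudo is available; it illustrates the pattern only and is not minikube's own cri.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same component names the log enumerates, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same query as the log: list all containers (any state) matching the name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers\n", name, len(ids)) // 0 for every component in this run
	}
}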
	I0404 22:57:03.362881   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.363597   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.306377   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:07.805659   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:06.842313   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.341347   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:07.451203   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:07.465593   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:07.465665   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:07.505475   65393 cri.go:89] found id: ""
	I0404 22:57:07.505504   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.505516   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:07.505524   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:07.505584   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:07.543744   65393 cri.go:89] found id: ""
	I0404 22:57:07.543802   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.543814   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:07.543822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:07.543891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:07.586034   65393 cri.go:89] found id: ""
	I0404 22:57:07.586059   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.586067   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:07.586073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:07.586133   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:07.624880   65393 cri.go:89] found id: ""
	I0404 22:57:07.624908   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.624917   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:07.624932   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:07.624992   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:07.667653   65393 cri.go:89] found id: ""
	I0404 22:57:07.667684   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.667696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:07.667704   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:07.667798   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:07.704434   65393 cri.go:89] found id: ""
	I0404 22:57:07.704466   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.704478   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:07.704486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:07.704547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:07.741679   65393 cri.go:89] found id: ""
	I0404 22:57:07.741702   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.741710   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:07.741715   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:07.741760   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:07.782407   65393 cri.go:89] found id: ""
	I0404 22:57:07.782434   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.782442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:07.782450   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:07.782461   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:07.796141   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:07.796175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:07.880985   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:07.881004   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:07.881015   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:07.963986   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:07.964027   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:08.010874   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:08.010907   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.564802   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:10.581360   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:10.581430   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:10.619856   65393 cri.go:89] found id: ""
	I0404 22:57:10.619882   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.619889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:10.619895   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:10.619958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:10.662867   65393 cri.go:89] found id: ""
	I0404 22:57:10.662893   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.662900   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:10.662906   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:10.662966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:10.705241   65393 cri.go:89] found id: ""
	I0404 22:57:10.705277   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.705290   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:10.705298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:10.705363   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:10.743811   65393 cri.go:89] found id: ""
	I0404 22:57:10.743844   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.743855   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:10.743863   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:10.743918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:10.783648   65393 cri.go:89] found id: ""
	I0404 22:57:10.783672   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.783684   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:10.783690   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:10.783743   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:10.825606   65393 cri.go:89] found id: ""
	I0404 22:57:10.825639   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.825649   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:10.825657   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:10.825712   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:10.865138   65393 cri.go:89] found id: ""
	I0404 22:57:10.865168   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.865178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:10.865185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:10.865238   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:10.906855   65393 cri.go:89] found id: ""
	I0404 22:57:10.906881   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.906888   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:10.906895   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:10.906937   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.960341   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:10.960375   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:10.975199   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:10.975228   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:11.083944   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:11.083970   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:11.083985   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:11.166754   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:11.166794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:07.860607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.864277   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.806222   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:12.305904   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:11.841055   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.841573   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:15.843393   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.708326   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:13.724345   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:13.724413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:13.766699   65393 cri.go:89] found id: ""
	I0404 22:57:13.766732   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.766744   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:13.766751   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:13.766813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:13.804517   65393 cri.go:89] found id: ""
	I0404 22:57:13.804545   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.804556   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:13.804564   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:13.804623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:13.846168   65393 cri.go:89] found id: ""
	I0404 22:57:13.846202   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.846213   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:13.846219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:13.846278   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:13.894719   65393 cri.go:89] found id: ""
	I0404 22:57:13.894743   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.894753   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:13.894761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:13.894823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:13.929981   65393 cri.go:89] found id: ""
	I0404 22:57:13.930013   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.930024   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:13.930031   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:13.930102   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:13.968531   65393 cri.go:89] found id: ""
	I0404 22:57:13.968578   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.968590   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:13.968598   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:13.968662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:14.012460   65393 cri.go:89] found id: ""
	I0404 22:57:14.012492   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.012502   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:14.012509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:14.012571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:14.051175   65393 cri.go:89] found id: ""
	I0404 22:57:14.051207   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.051218   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:14.051228   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:14.051243   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:14.107968   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:14.108004   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:14.123756   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:14.123794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:14.209452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:14.209475   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:14.209493   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:14.287297   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:14.287334   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:11.864620   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.360720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.361734   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.307191   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.806031   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.341533   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.344059   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.832667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:16.850323   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:16.850396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:16.897389   65393 cri.go:89] found id: ""
	I0404 22:57:16.897415   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.897426   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:16.897433   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:16.897502   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:16.938893   65393 cri.go:89] found id: ""
	I0404 22:57:16.938927   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.938939   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:16.938945   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:16.939005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:16.978621   65393 cri.go:89] found id: ""
	I0404 22:57:16.978648   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.978655   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:16.978661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:16.978707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:17.036499   65393 cri.go:89] found id: ""
	I0404 22:57:17.036523   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.036532   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:17.036539   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:17.036596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:17.092932   65393 cri.go:89] found id: ""
	I0404 22:57:17.092958   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.092966   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:17.092972   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:17.093026   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:17.129925   65393 cri.go:89] found id: ""
	I0404 22:57:17.129948   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.129956   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:17.129963   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:17.130025   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:17.168192   65393 cri.go:89] found id: ""
	I0404 22:57:17.168218   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.168232   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:17.168238   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:17.168317   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:17.213090   65393 cri.go:89] found id: ""
	I0404 22:57:17.213115   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.213125   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:17.213136   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:17.213149   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:17.295489   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:17.295527   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:17.350780   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:17.350832   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:17.403580   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:17.403621   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:17.419093   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:17.419131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:17.503614   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
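Every "describe nodes" attempt above fails with "connection refused" on localhost:8443, which simply means nothing is listening on the apiserver port yet. A small sketch of the same reachability check, assuming it is run on the node itself (illustrative, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port directly; a refused connection reproduces
	// the kubectl error seen in the log without involving kubectl at all.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}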
	I0404 22:57:20.004676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:20.019340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:20.019421   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:20.059400   65393 cri.go:89] found id: ""
	I0404 22:57:20.059431   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.059442   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:20.059448   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:20.059510   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:20.104769   65393 cri.go:89] found id: ""
	I0404 22:57:20.104796   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.104808   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:20.104815   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:20.104875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:20.143209   65393 cri.go:89] found id: ""
	I0404 22:57:20.143233   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.143241   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:20.143248   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:20.143300   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:20.187941   65393 cri.go:89] found id: ""
	I0404 22:57:20.187976   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.187987   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:20.187995   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:20.188082   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:20.231158   65393 cri.go:89] found id: ""
	I0404 22:57:20.231192   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.231200   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:20.231206   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:20.231271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:20.281069   65393 cri.go:89] found id: ""
	I0404 22:57:20.281102   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.281113   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:20.281121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:20.281187   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:20.320403   65393 cri.go:89] found id: ""
	I0404 22:57:20.320436   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.320448   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:20.320456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:20.320529   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:20.371202   65393 cri.go:89] found id: ""
	I0404 22:57:20.371229   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.371237   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:20.371246   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:20.371256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:20.416698   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:20.416725   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:20.468328   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:20.468362   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:20.483250   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:20.483274   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:20.562841   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.562871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:20.562886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:18.363136   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.860719   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.806337   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:21.306098   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:22.841457   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:24.843001   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.141477   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:23.157859   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:23.157924   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:23.203492   65393 cri.go:89] found id: ""
	I0404 22:57:23.203526   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.203538   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:23.203545   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:23.203613   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:23.248190   65393 cri.go:89] found id: ""
	I0404 22:57:23.248220   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.248231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:23.248244   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:23.248310   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:23.284427   65393 cri.go:89] found id: ""
	I0404 22:57:23.284456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.284467   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:23.284475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:23.284562   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:23.322429   65393 cri.go:89] found id: ""
	I0404 22:57:23.322456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.322464   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:23.322469   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:23.322534   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:23.364030   65393 cri.go:89] found id: ""
	I0404 22:57:23.364069   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.364080   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:23.364087   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:23.364168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:23.408306   65393 cri.go:89] found id: ""
	I0404 22:57:23.408343   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.408356   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:23.408363   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:23.408423   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:23.452933   65393 cri.go:89] found id: ""
	I0404 22:57:23.452968   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.452976   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:23.452982   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:23.453036   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:23.494156   65393 cri.go:89] found id: ""
	I0404 22:57:23.494184   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.494193   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:23.494203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:23.494222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:23.548013   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:23.548053   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:23.564765   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:23.564797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:23.642661   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:23.642685   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:23.642700   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:23.737958   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:23.737996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:26.290576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:26.307580   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:26.307641   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:26.345971   65393 cri.go:89] found id: ""
	I0404 22:57:26.346000   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.346011   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:26.346018   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:26.346077   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:26.386979   65393 cri.go:89] found id: ""
	I0404 22:57:26.387009   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.387019   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:26.387026   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:26.387112   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:26.430627   65393 cri.go:89] found id: ""
	I0404 22:57:26.430649   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.430665   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:26.430671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:26.430724   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:26.469657   65393 cri.go:89] found id: ""
	I0404 22:57:26.469705   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.469716   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:26.469723   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:26.469794   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:22.861245   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.360840   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.804732   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.805727   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.805819   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.343674   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.843199   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:26.513853   65393 cri.go:89] found id: ""
	I0404 22:57:26.513877   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.513887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:26.513894   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:26.513954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:26.557702   65393 cri.go:89] found id: ""
	I0404 22:57:26.557731   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.557742   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:26.557749   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:26.557807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:26.595248   65393 cri.go:89] found id: ""
	I0404 22:57:26.595279   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.595291   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:26.595298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:26.595364   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:26.634552   65393 cri.go:89] found id: ""
	I0404 22:57:26.634581   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.634591   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:26.634603   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:26.634618   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:26.686928   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:26.686963   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:26.701308   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:26.701341   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:26.785243   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:26.785267   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:26.785286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:26.867513   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:26.867555   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.416286   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:29.434234   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:29.434308   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:29.488598   65393 cri.go:89] found id: ""
	I0404 22:57:29.488628   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.488637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:29.488643   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:29.488727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:29.541107   65393 cri.go:89] found id: ""
	I0404 22:57:29.541130   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.541138   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:29.541144   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:29.541204   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:29.597144   65393 cri.go:89] found id: ""
	I0404 22:57:29.597174   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.597186   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:29.597192   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:29.597258   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:29.636833   65393 cri.go:89] found id: ""
	I0404 22:57:29.636865   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.636874   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:29.636881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:29.636961   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:29.672867   65393 cri.go:89] found id: ""
	I0404 22:57:29.672893   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.672903   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:29.672909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:29.672970   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:29.713135   65393 cri.go:89] found id: ""
	I0404 22:57:29.713168   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.713180   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:29.713188   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:29.713279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:29.750862   65393 cri.go:89] found id: ""
	I0404 22:57:29.750895   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.750906   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:29.750914   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:29.750965   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:29.790274   65393 cri.go:89] found id: ""
	I0404 22:57:29.790302   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.790311   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:29.790320   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:29.790332   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.831848   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:29.831884   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:29.889443   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:29.889478   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:29.905468   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:29.905500   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:29.983283   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:29.983313   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:29.983328   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:27.862273   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:30.361189   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.806568   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:31.807248   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.341638   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.841257   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.567351   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:32.581988   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:32.582052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:32.619164   65393 cri.go:89] found id: ""
	I0404 22:57:32.619191   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.619201   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:32.619208   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:32.619262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:32.656844   65393 cri.go:89] found id: ""
	I0404 22:57:32.656882   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.656894   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:32.656902   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:32.656966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:32.693024   65393 cri.go:89] found id: ""
	I0404 22:57:32.693052   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.693064   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:32.693071   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:32.693134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:32.732579   65393 cri.go:89] found id: ""
	I0404 22:57:32.732609   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.732618   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:32.732625   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:32.732685   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:32.777586   65393 cri.go:89] found id: ""
	I0404 22:57:32.777624   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.777634   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:32.777644   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:32.777713   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:32.817079   65393 cri.go:89] found id: ""
	I0404 22:57:32.817115   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.817127   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:32.817136   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:32.817199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:32.856945   65393 cri.go:89] found id: ""
	I0404 22:57:32.856978   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.856988   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:32.856994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:32.857047   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:32.895047   65393 cri.go:89] found id: ""
	I0404 22:57:32.895070   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.895082   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:32.895090   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:32.895103   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.949865   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:32.949904   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:32.964629   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:32.964656   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:33.039424   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:33.039446   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:33.039458   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:33.115819   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:33.115854   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:35.659512   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:35.674512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:35.674589   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:35.714058   65393 cri.go:89] found id: ""
	I0404 22:57:35.714093   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.714102   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:35.714109   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:35.714173   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:35.753765   65393 cri.go:89] found id: ""
	I0404 22:57:35.753800   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.753811   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:35.753819   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:35.753883   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:35.792177   65393 cri.go:89] found id: ""
	I0404 22:57:35.792204   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.792216   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:35.792223   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:35.792288   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:35.829891   65393 cri.go:89] found id: ""
	I0404 22:57:35.829923   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.829943   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:35.829952   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:35.830012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:35.872224   65393 cri.go:89] found id: ""
	I0404 22:57:35.872255   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.872267   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:35.872276   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:35.872341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:35.909984   65393 cri.go:89] found id: ""
	I0404 22:57:35.910010   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.910020   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:35.910027   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:35.910088   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:35.948755   65393 cri.go:89] found id: ""
	I0404 22:57:35.948784   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.948795   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:35.948805   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:35.948868   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:35.984167   65393 cri.go:89] found id: ""
	I0404 22:57:35.984195   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.984203   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:35.984212   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:35.984224   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:35.998714   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:35.998740   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:36.070003   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:36.070028   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:36.070044   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:36.149404   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:36.149442   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:36.190749   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:36.190776   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.362424   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.861745   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.306625   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.805161   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.841627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.341476   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:38.747728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:38.761768   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:38.761840   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:38.802423   65393 cri.go:89] found id: ""
	I0404 22:57:38.802456   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.802467   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:38.802474   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:38.802525   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:38.843476   65393 cri.go:89] found id: ""
	I0404 22:57:38.843500   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.843508   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:38.843513   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:38.843583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:38.883093   65393 cri.go:89] found id: ""
	I0404 22:57:38.883125   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.883136   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:38.883145   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:38.883203   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:38.919802   65393 cri.go:89] found id: ""
	I0404 22:57:38.919831   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.919840   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:38.919847   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:38.919914   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:38.955242   65393 cri.go:89] found id: ""
	I0404 22:57:38.955280   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.955294   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:38.955302   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:38.955366   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:38.993377   65393 cri.go:89] found id: ""
	I0404 22:57:38.993409   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.993420   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:38.993428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:38.993486   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:39.033379   65393 cri.go:89] found id: ""
	I0404 22:57:39.033406   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.033417   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:39.033424   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:39.033499   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:39.076827   65393 cri.go:89] found id: ""
	I0404 22:57:39.076853   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.076862   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:39.076870   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:39.076880   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:39.134007   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:39.134045   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:39.149271   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:39.149298   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:39.222569   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:39.222593   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:39.222608   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:39.309583   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:39.309613   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:37.367650   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.862967   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.305489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.807107   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.842108   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.340297   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.343170   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.850659   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:41.866430   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:41.866493   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:41.906390   65393 cri.go:89] found id: ""
	I0404 22:57:41.906418   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.906437   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:41.906445   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:41.906506   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:41.942002   65393 cri.go:89] found id: ""
	I0404 22:57:41.942029   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.942039   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:41.942047   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:41.942105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:41.984054   65393 cri.go:89] found id: ""
	I0404 22:57:41.984078   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.984086   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:41.984092   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:41.984167   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:42.022757   65393 cri.go:89] found id: ""
	I0404 22:57:42.022779   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.022787   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:42.022792   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:42.022869   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:42.063261   65393 cri.go:89] found id: ""
	I0404 22:57:42.063284   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.063292   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:42.063297   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:42.063347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:42.102752   65393 cri.go:89] found id: ""
	I0404 22:57:42.102783   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.102800   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:42.102809   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:42.102872   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:42.139533   65393 cri.go:89] found id: ""
	I0404 22:57:42.139561   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.139570   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:42.139576   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:42.139627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:42.180944   65393 cri.go:89] found id: ""
	I0404 22:57:42.180976   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.180986   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:42.180996   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:42.181008   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:42.231499   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:42.231533   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:42.247888   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:42.247918   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:42.327828   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:42.327849   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:42.327860   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:42.413471   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:42.413509   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:44.958686   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:44.973081   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:44.973159   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:45.011068   65393 cri.go:89] found id: ""
	I0404 22:57:45.011095   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.011103   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:45.011108   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:45.011162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:45.052522   65393 cri.go:89] found id: ""
	I0404 22:57:45.052560   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.052578   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:45.052586   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:45.052649   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:45.091172   65393 cri.go:89] found id: ""
	I0404 22:57:45.091204   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.091215   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:45.091222   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:45.091289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:45.129853   65393 cri.go:89] found id: ""
	I0404 22:57:45.129883   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.129892   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:45.129900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:45.129960   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:45.167779   65393 cri.go:89] found id: ""
	I0404 22:57:45.167806   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.167815   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:45.167822   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:45.167881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:45.205111   65393 cri.go:89] found id: ""
	I0404 22:57:45.205139   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.205153   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:45.205158   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:45.205231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:45.240931   65393 cri.go:89] found id: ""
	I0404 22:57:45.240957   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.240965   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:45.240971   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:45.241033   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:45.277934   65393 cri.go:89] found id: ""
	I0404 22:57:45.277956   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.277964   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:45.277974   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:45.277989   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:45.332725   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:45.332755   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:45.349776   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:45.349806   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:45.428071   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:45.428098   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:45.428113   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:45.510148   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:45.510188   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:42.361878   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.861076   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.304994   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.305336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.841019   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.341801   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.052655   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:48.067339   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:48.067416   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:48.101725   65393 cri.go:89] found id: ""
	I0404 22:57:48.101756   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.101765   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:48.101771   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:48.101823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:48.137109   65393 cri.go:89] found id: ""
	I0404 22:57:48.137136   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.137147   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:48.137153   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:48.137216   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:48.174703   65393 cri.go:89] found id: ""
	I0404 22:57:48.174735   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.174745   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:48.174751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:48.174800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:48.213531   65393 cri.go:89] found id: ""
	I0404 22:57:48.213554   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.213565   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:48.213572   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:48.213621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:48.252457   65393 cri.go:89] found id: ""
	I0404 22:57:48.252483   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.252493   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:48.252500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:48.252566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:48.297769   65393 cri.go:89] found id: ""
	I0404 22:57:48.297797   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.297807   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:48.297815   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:48.297871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:48.336275   65393 cri.go:89] found id: ""
	I0404 22:57:48.336297   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.336312   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:48.336319   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:48.336373   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:48.375192   65393 cri.go:89] found id: ""
	I0404 22:57:48.375220   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.375230   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:48.375242   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:48.375256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:48.428276   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:48.428309   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:48.442545   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:48.442572   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:48.521287   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:48.521314   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:48.521326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:48.605921   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:48.605965   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:51.148671   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:51.163922   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:51.163993   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:51.205098   65393 cri.go:89] found id: ""
	I0404 22:57:51.205127   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.205137   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:51.205145   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:51.205210   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:51.241238   65393 cri.go:89] found id: ""
	I0404 22:57:51.241265   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.241276   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:51.241287   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:51.241352   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:51.279667   65393 cri.go:89] found id: ""
	I0404 22:57:51.279691   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.279700   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:51.279710   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:51.279767   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:51.325092   65393 cri.go:89] found id: ""
	I0404 22:57:51.325114   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.325121   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:51.325127   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:51.325175   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:51.367212   65393 cri.go:89] found id: ""
	I0404 22:57:51.367233   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.367244   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:51.367252   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:51.367334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:51.407716   65393 cri.go:89] found id: ""
	I0404 22:57:51.407739   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.407747   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:51.407753   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:51.407800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:51.448021   65393 cri.go:89] found id: ""
	I0404 22:57:51.448050   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.448060   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:51.448066   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:51.448138   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:46.862201   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:49.361167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.365099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.806224   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.306785   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.840934   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.845038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.484815   65393 cri.go:89] found id: ""
	I0404 22:57:51.484841   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.484849   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:51.484857   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:51.484868   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:51.538503   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:51.538530   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:51.552658   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:51.552686   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:51.628261   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:51.628288   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:51.628313   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:51.713890   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:51.713935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:54.257046   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:54.270797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:54.270853   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:54.308550   65393 cri.go:89] found id: ""
	I0404 22:57:54.308579   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.308589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:54.308596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:54.308666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:54.347736   65393 cri.go:89] found id: ""
	I0404 22:57:54.347764   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.347773   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:54.347779   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:54.347825   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:54.389179   65393 cri.go:89] found id: ""
	I0404 22:57:54.389209   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.389220   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:54.389227   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:54.389286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:54.429961   65393 cri.go:89] found id: ""
	I0404 22:57:54.429984   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.429993   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:54.429998   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:54.430053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:54.469383   65393 cri.go:89] found id: ""
	I0404 22:57:54.469406   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.469414   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:54.469419   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:54.469481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:54.506260   65393 cri.go:89] found id: ""
	I0404 22:57:54.506291   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.506302   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:54.506309   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:54.506374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:54.545050   65393 cri.go:89] found id: ""
	I0404 22:57:54.545080   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.545090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:54.545096   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:54.545144   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:54.584895   65393 cri.go:89] found id: ""
	I0404 22:57:54.584932   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.584943   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:54.584955   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:54.584969   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:54.637294   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:54.637329   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:54.652792   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:54.652821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:54.729248   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:54.729268   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:54.729286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:54.810107   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:54.810135   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:53.860177   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.861048   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.805077   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.806516   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.306075   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.341767   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.343121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:57.356688   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:57.371841   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:57.371901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:57.408063   65393 cri.go:89] found id: ""
	I0404 22:57:57.408087   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.408095   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:57.408112   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:57.408199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:57.444197   65393 cri.go:89] found id: ""
	I0404 22:57:57.444222   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.444231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:57.444241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:57.444291   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:57.479502   65393 cri.go:89] found id: ""
	I0404 22:57:57.479528   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.479536   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:57.479542   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:57.479593   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:57.515017   65393 cri.go:89] found id: ""
	I0404 22:57:57.515044   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.515057   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:57.515064   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:57.515113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:57.550568   65393 cri.go:89] found id: ""
	I0404 22:57:57.550595   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.550603   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:57.550609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:57.550670   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:57.587662   65393 cri.go:89] found id: ""
	I0404 22:57:57.587689   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.587697   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:57.587703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:57.587761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:57.624184   65393 cri.go:89] found id: ""
	I0404 22:57:57.624206   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.624213   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:57.624219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:57.624274   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:57.662289   65393 cri.go:89] found id: ""
	I0404 22:57:57.662312   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.662320   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:57.662329   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:57.662339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:57.677703   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:57.677728   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:57.768164   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:57.768191   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:57.768206   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.853138   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:57.853175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:57.896254   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:57.896291   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.448289   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:00.462243   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:00.462306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:00.500146   65393 cri.go:89] found id: ""
	I0404 22:58:00.500180   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.500191   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:00.500199   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:00.500260   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:00.539443   65393 cri.go:89] found id: ""
	I0404 22:58:00.539469   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.539477   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:00.539482   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:00.539532   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:00.582120   65393 cri.go:89] found id: ""
	I0404 22:58:00.582149   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.582160   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:00.582167   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:00.582231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:00.622269   65393 cri.go:89] found id: ""
	I0404 22:58:00.622291   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.622299   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:00.622305   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:00.622361   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:00.659947   65393 cri.go:89] found id: ""
	I0404 22:58:00.659980   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.659992   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:00.659999   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:00.660053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:00.695604   65393 cri.go:89] found id: ""
	I0404 22:58:00.695632   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.695642   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:00.695650   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:00.695707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:00.737962   65393 cri.go:89] found id: ""
	I0404 22:58:00.738044   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.738065   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:00.738075   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:00.738168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:00.781043   65393 cri.go:89] found id: ""
	I0404 22:58:00.781069   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.781076   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:00.781085   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:00.781096   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:00.828143   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:00.828174   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.883483   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:00.883521   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:00.899435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:00.899463   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:00.975258   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:00.975277   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:00.975288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.861562   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.362255   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.306310   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.805208   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.844797   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.341139   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:03.553230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:03.567909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:03.568005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:03.608638   65393 cri.go:89] found id: ""
	I0404 22:58:03.608666   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.608675   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:03.608680   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:03.608727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:03.647698   65393 cri.go:89] found id: ""
	I0404 22:58:03.647726   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.647735   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:03.647741   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:03.647804   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:03.686143   65393 cri.go:89] found id: ""
	I0404 22:58:03.686167   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.686176   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:03.686181   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:03.686230   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:03.723462   65393 cri.go:89] found id: ""
	I0404 22:58:03.723487   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.723494   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:03.723500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:03.723556   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:03.762299   65393 cri.go:89] found id: ""
	I0404 22:58:03.762332   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.762342   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:03.762350   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:03.762426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:03.798165   65393 cri.go:89] found id: ""
	I0404 22:58:03.798198   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.798215   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:03.798225   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:03.798292   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:03.837607   65393 cri.go:89] found id: ""
	I0404 22:58:03.837636   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.837648   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:03.837655   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:03.837716   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:03.877346   65393 cri.go:89] found id: ""
	I0404 22:58:03.877380   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.877394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:03.877410   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:03.877432   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:03.934033   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:03.934073   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:03.949106   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:03.949131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:04.022674   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:04.022695   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:04.022707   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:04.101187   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:04.101225   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:02.860519   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:04.861267   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.305637   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.804768   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.341713   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.841527   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:06.653116   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:06.667790   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:06.667867   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:06.712208   65393 cri.go:89] found id: ""
	I0404 22:58:06.712230   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.712238   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:06.712243   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:06.712289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:06.748489   65393 cri.go:89] found id: ""
	I0404 22:58:06.748522   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.748533   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:06.748540   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:06.748602   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:06.786720   65393 cri.go:89] found id: ""
	I0404 22:58:06.786745   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.786753   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:06.786758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:06.786805   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:06.825404   65393 cri.go:89] found id: ""
	I0404 22:58:06.825437   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.825444   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:06.825461   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:06.825515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:06.864851   65393 cri.go:89] found id: ""
	I0404 22:58:06.864879   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.864890   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:06.864898   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:06.864959   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:06.906229   65393 cri.go:89] found id: ""
	I0404 22:58:06.906258   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.906268   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:06.906274   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:06.906327   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:06.946126   65393 cri.go:89] found id: ""
	I0404 22:58:06.946153   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.946164   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:06.946172   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:06.946234   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:06.983746   65393 cri.go:89] found id: ""
	I0404 22:58:06.983779   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.983792   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:06.983805   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:06.983821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:07.038290   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:07.038330   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:07.053847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:07.053875   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:07.127453   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:07.127479   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:07.127496   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:07.206638   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:07.206676   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:09.753016   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:09.768850   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:09.768918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:09.804549   65393 cri.go:89] found id: ""
	I0404 22:58:09.804579   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.804589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:09.804596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:09.804653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:09.847299   65393 cri.go:89] found id: ""
	I0404 22:58:09.847323   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.847334   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:09.847341   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:09.847399   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:09.902064   65393 cri.go:89] found id: ""
	I0404 22:58:09.902093   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.902104   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:09.902111   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:09.902171   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:09.953956   65393 cri.go:89] found id: ""
	I0404 22:58:09.953986   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.953997   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:09.954003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:09.954071   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:10.007853   65393 cri.go:89] found id: ""
	I0404 22:58:10.007884   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.007892   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:10.007897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:10.007954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:10.046924   65393 cri.go:89] found id: ""
	I0404 22:58:10.046960   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.046970   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:10.046977   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:10.047038   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:10.086851   65393 cri.go:89] found id: ""
	I0404 22:58:10.086878   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.086890   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:10.086896   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:10.086956   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:10.126678   65393 cri.go:89] found id: ""
	I0404 22:58:10.126710   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.126719   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:10.126727   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:10.126741   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:10.142641   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:10.142669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:10.226953   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:10.226978   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:10.226991   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:10.310046   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:10.310078   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:10.356140   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:10.356173   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:06.861772   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.361398   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:11.361942   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.806398   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.306003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.341652   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.341848   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.343338   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.911501   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:12.924292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:12.924374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:12.959969   65393 cri.go:89] found id: ""
	I0404 22:58:12.959997   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.960007   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:12.960015   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:12.960064   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:12.998817   65393 cri.go:89] found id: ""
	I0404 22:58:12.998846   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.998856   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:12.998876   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:12.998943   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:13.035295   65393 cri.go:89] found id: ""
	I0404 22:58:13.035326   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.035337   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:13.035343   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:13.035403   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:13.076712   65393 cri.go:89] found id: ""
	I0404 22:58:13.076735   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.076744   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:13.076751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:13.076823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:13.112987   65393 cri.go:89] found id: ""
	I0404 22:58:13.113015   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.113023   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:13.113029   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:13.113092   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:13.155330   65393 cri.go:89] found id: ""
	I0404 22:58:13.155355   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.155366   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:13.155373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:13.155432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:13.194322   65393 cri.go:89] found id: ""
	I0404 22:58:13.194357   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.194368   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:13.194374   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:13.194432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:13.231939   65393 cri.go:89] found id: ""
	I0404 22:58:13.231969   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.231981   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:13.231993   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:13.232012   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:13.312244   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:13.312278   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:13.356605   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:13.356640   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:13.411760   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:13.411801   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:13.427373   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:13.427397   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:13.509840   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.010230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:16.023985   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:16.024050   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:16.063038   65393 cri.go:89] found id: ""
	I0404 22:58:16.063062   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.063070   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:16.063075   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:16.063128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:16.100140   65393 cri.go:89] found id: ""
	I0404 22:58:16.100170   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.100182   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:16.100189   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:16.100252   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:16.137319   65393 cri.go:89] found id: ""
	I0404 22:58:16.137354   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.137362   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:16.137373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:16.137425   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:16.173127   65393 cri.go:89] found id: ""
	I0404 22:58:16.173151   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.173159   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:16.173166   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:16.173212   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:16.213639   65393 cri.go:89] found id: ""
	I0404 22:58:16.213667   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.213676   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:16.213683   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:16.213744   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:16.255272   65393 cri.go:89] found id: ""
	I0404 22:58:16.255301   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.255312   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:16.255320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:16.255381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:16.289359   65393 cri.go:89] found id: ""
	I0404 22:58:16.289387   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.289397   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:16.289404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:16.289466   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:16.326711   65393 cri.go:89] found id: ""
	I0404 22:58:16.326738   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.326748   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:16.326791   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:16.326817   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:16.374754   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:16.374788   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:16.425956   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:16.425994   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:16.440815   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:16.440848   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:58:13.363022   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:15.860774   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.306144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.804788   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.844733   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:21.340627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	W0404 22:58:16.512725   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.512750   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:16.512770   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.099694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:19.113559   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:19.113617   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:19.148531   65393 cri.go:89] found id: ""
	I0404 22:58:19.148553   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.148561   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:19.148567   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:19.148627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:19.186728   65393 cri.go:89] found id: ""
	I0404 22:58:19.186750   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.186759   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:19.186764   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:19.186809   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:19.223221   65393 cri.go:89] found id: ""
	I0404 22:58:19.223269   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.223277   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:19.223283   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:19.223350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:19.260459   65393 cri.go:89] found id: ""
	I0404 22:58:19.260492   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.260502   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:19.260509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:19.260571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:19.298503   65393 cri.go:89] found id: ""
	I0404 22:58:19.298527   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.298534   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:19.298540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:19.298603   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:19.339562   65393 cri.go:89] found id: ""
	I0404 22:58:19.339595   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.339605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:19.339613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:19.339674   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:19.379350   65393 cri.go:89] found id: ""
	I0404 22:58:19.379383   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.379394   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:19.379401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:19.379501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:19.417360   65393 cri.go:89] found id: ""
	I0404 22:58:19.417387   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.417394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:19.417403   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:19.417420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.493267   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:19.493300   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:19.533913   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:19.533948   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:19.585900   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:19.585936   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:19.601225   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:19.601259   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:19.675774   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:17.861139   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.360910   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.806558   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.807066   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.304681   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.342104   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.843695   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:22.176660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:22.190161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:22.190239   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:22.226569   65393 cri.go:89] found id: ""
	I0404 22:58:22.226601   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.226612   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:22.226621   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:22.226678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:22.263193   65393 cri.go:89] found id: ""
	I0404 22:58:22.263221   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.263232   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:22.263239   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:22.263296   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:22.305585   65393 cri.go:89] found id: ""
	I0404 22:58:22.305613   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.305625   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:22.305632   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:22.305688   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:22.343575   65393 cri.go:89] found id: ""
	I0404 22:58:22.343602   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.343613   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:22.343620   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:22.343675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:22.381394   65393 cri.go:89] found id: ""
	I0404 22:58:22.381423   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.381432   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:22.381438   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:22.381488   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:22.423631   65393 cri.go:89] found id: ""
	I0404 22:58:22.423664   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.423673   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:22.423680   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:22.423755   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:22.462618   65393 cri.go:89] found id: ""
	I0404 22:58:22.462651   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.462662   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:22.462669   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:22.462729   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:22.499448   65393 cri.go:89] found id: ""
	I0404 22:58:22.499473   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.499481   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:22.499490   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:22.499504   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:22.552937   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:22.552976   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:22.568480   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:22.568508   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:22.647552   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:22.647575   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:22.647587   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:22.732328   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:22.732366   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:25.275187   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:25.290575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:25.290660   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:25.329699   65393 cri.go:89] found id: ""
	I0404 22:58:25.329728   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.329737   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:25.329744   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:25.329808   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:25.368206   65393 cri.go:89] found id: ""
	I0404 22:58:25.368239   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.368250   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:25.368257   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:25.368320   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:25.405444   65393 cri.go:89] found id: ""
	I0404 22:58:25.405490   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.405500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:25.405508   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:25.405566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:25.448856   65393 cri.go:89] found id: ""
	I0404 22:58:25.448883   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.448891   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:25.448897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:25.448952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:25.489308   65393 cri.go:89] found id: ""
	I0404 22:58:25.489340   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.489351   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:25.489358   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:25.489418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:25.532786   65393 cri.go:89] found id: ""
	I0404 22:58:25.532810   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.532820   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:25.532828   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:25.532887   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:25.569370   65393 cri.go:89] found id: ""
	I0404 22:58:25.569400   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.569409   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:25.569428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:25.569508   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:25.609516   65393 cri.go:89] found id: ""
	I0404 22:58:25.609542   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.609553   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:25.609564   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:25.609579   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:25.661976   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:25.662011   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:25.676718   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:25.676743   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:25.754582   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:25.754612   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:25.754629   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:25.837707   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:25.837751   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:22.367423   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:24.860670   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:27.306275   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.343146   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.840946   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.381474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:28.396475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:28.396549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:28.439207   65393 cri.go:89] found id: ""
	I0404 22:58:28.439239   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.439251   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:28.439259   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:28.439341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:28.481537   65393 cri.go:89] found id: ""
	I0404 22:58:28.481559   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.481567   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:28.481572   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:28.481622   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:28.522159   65393 cri.go:89] found id: ""
	I0404 22:58:28.522183   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.522194   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:28.522202   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:28.522267   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:28.563587   65393 cri.go:89] found id: ""
	I0404 22:58:28.563623   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.563634   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:28.563641   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:28.563706   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:28.601846   65393 cri.go:89] found id: ""
	I0404 22:58:28.601874   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.601885   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:28.601892   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:28.601971   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:28.639734   65393 cri.go:89] found id: ""
	I0404 22:58:28.639758   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.639765   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:28.639773   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:28.639832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:28.679049   65393 cri.go:89] found id: ""
	I0404 22:58:28.679079   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.679090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:28.679097   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:28.679152   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:28.721353   65393 cri.go:89] found id: ""
	I0404 22:58:28.721380   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.721390   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:28.721400   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:28.721414   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:28.776618   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:28.776666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:28.792435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:28.792473   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:28.867190   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:28.867219   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:28.867238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:28.950021   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:28.950056   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:26.861121   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.861752   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.862006   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:29.805482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.806982   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:32.843651   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.341399   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.496813   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:31.511401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:31.511462   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:31.552422   65393 cri.go:89] found id: ""
	I0404 22:58:31.552450   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.552458   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:31.552464   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:31.552518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:31.590548   65393 cri.go:89] found id: ""
	I0404 22:58:31.590579   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.590591   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:31.590598   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:31.590653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:31.626198   65393 cri.go:89] found id: ""
	I0404 22:58:31.626227   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.626238   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:31.626245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:31.626316   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:31.664925   65393 cri.go:89] found id: ""
	I0404 22:58:31.664952   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.664960   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:31.664966   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:31.665017   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:31.703008   65393 cri.go:89] found id: ""
	I0404 22:58:31.703031   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.703038   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:31.703044   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:31.703103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:31.741854   65393 cri.go:89] found id: ""
	I0404 22:58:31.741875   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.741884   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:31.741890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:31.741942   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:31.782573   65393 cri.go:89] found id: ""
	I0404 22:58:31.782603   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.782616   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:31.782624   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:31.782675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:31.819796   65393 cri.go:89] found id: ""
	I0404 22:58:31.819832   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.819844   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:31.819855   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:31.819872   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:31.836362   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:31.836396   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:31.916417   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:31.916438   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:31.916451   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:31.999266   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:31.999302   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:32.048147   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:32.048179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:34.601683   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:34.618245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:34.618329   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:34.660565   65393 cri.go:89] found id: ""
	I0404 22:58:34.660591   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.660598   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:34.660604   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:34.660654   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:34.696366   65393 cri.go:89] found id: ""
	I0404 22:58:34.696389   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.696397   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:34.696402   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:34.696463   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:34.734234   65393 cri.go:89] found id: ""
	I0404 22:58:34.734281   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.734293   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:34.734300   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:34.734369   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:34.770632   65393 cri.go:89] found id: ""
	I0404 22:58:34.770668   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.770681   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:34.770689   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:34.770752   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:34.808562   65393 cri.go:89] found id: ""
	I0404 22:58:34.808590   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.808600   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:34.808607   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:34.808677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:34.844177   65393 cri.go:89] found id: ""
	I0404 22:58:34.844209   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.844219   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:34.844228   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:34.844315   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:34.886060   65393 cri.go:89] found id: ""
	I0404 22:58:34.886095   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.886106   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:34.886114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:34.886174   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:34.924731   65393 cri.go:89] found id: ""
	I0404 22:58:34.924759   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.924769   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:34.924781   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:34.924798   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:34.940405   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:34.940437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:35.016762   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:35.016784   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:35.016800   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:35.096653   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:35.096688   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:35.144213   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:35.144241   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:33.361607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.860598   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:34.307621   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:36.805003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.341552   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.841619   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.702332   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:37.716442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:37.716515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:37.755197   65393 cri.go:89] found id: ""
	I0404 22:58:37.755233   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.755244   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:37.755251   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:37.755311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:37.799011   65393 cri.go:89] found id: ""
	I0404 22:58:37.799036   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.799044   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:37.799048   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:37.799105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:37.837437   65393 cri.go:89] found id: ""
	I0404 22:58:37.837466   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.837477   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:37.837486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:37.837543   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:37.876014   65393 cri.go:89] found id: ""
	I0404 22:58:37.876085   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.876097   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:37.876104   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:37.876179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:37.915088   65393 cri.go:89] found id: ""
	I0404 22:58:37.915121   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.915132   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:37.915140   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:37.915205   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:37.954991   65393 cri.go:89] found id: ""
	I0404 22:58:37.955028   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.955039   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:37.955047   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:37.955120   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:38.006875   65393 cri.go:89] found id: ""
	I0404 22:58:38.006906   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.006924   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:38.006930   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:38.006989   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:38.044477   65393 cri.go:89] found id: ""
	I0404 22:58:38.044513   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.044541   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:38.044553   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:38.044569   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:38.086425   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:38.086455   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:38.140159   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:38.140195   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:38.156371   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:38.156406   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:38.229011   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:38.229035   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:38.229058   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:40.809399   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:40.824612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:40.824694   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:40.869397   65393 cri.go:89] found id: ""
	I0404 22:58:40.869483   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.869510   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:40.869523   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:40.869583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:40.911732   65393 cri.go:89] found id: ""
	I0404 22:58:40.911760   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.911782   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:40.911788   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:40.911846   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:40.952165   65393 cri.go:89] found id: ""
	I0404 22:58:40.952193   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.952202   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:40.952209   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:40.952270   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:40.991566   65393 cri.go:89] found id: ""
	I0404 22:58:40.991598   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.991607   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:40.991613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:40.991661   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:41.033467   65393 cri.go:89] found id: ""
	I0404 22:58:41.033496   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.033505   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:41.033534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:41.033595   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:41.073350   65393 cri.go:89] found id: ""
	I0404 22:58:41.073395   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.073405   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:41.073410   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:41.073460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:41.113435   65393 cri.go:89] found id: ""
	I0404 22:58:41.113467   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.113478   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:41.113486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:41.113549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:41.152848   65393 cri.go:89] found id: ""
	I0404 22:58:41.152882   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.152892   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:41.152905   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:41.152919   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:41.199001   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:41.199039   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:41.251155   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:41.251200   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:41.268640   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:41.268669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:41.345101   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:41.345125   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:41.345142   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:37.862623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.865276   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:38.805692   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:40.806509   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.305851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:42.342629   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.841943   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.925251   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:43.940719   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:43.940838   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:43.982366   65393 cri.go:89] found id: ""
	I0404 22:58:43.982391   65393 logs.go:276] 0 containers: []
	W0404 22:58:43.982401   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:43.982409   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:43.982477   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:44.026906   65393 cri.go:89] found id: ""
	I0404 22:58:44.026941   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.026952   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:44.026959   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:44.027024   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:44.063914   65393 cri.go:89] found id: ""
	I0404 22:58:44.063940   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.063948   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:44.063954   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:44.064008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:44.107234   65393 cri.go:89] found id: ""
	I0404 22:58:44.107270   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.107283   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:44.107292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:44.107388   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:44.144613   65393 cri.go:89] found id: ""
	I0404 22:58:44.144637   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.144658   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:44.144664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:44.144734   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:44.184821   65393 cri.go:89] found id: ""
	I0404 22:58:44.184858   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.184866   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:44.184872   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:44.184920   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:44.227152   65393 cri.go:89] found id: ""
	I0404 22:58:44.227181   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.227192   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:44.227200   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:44.227262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:44.266503   65393 cri.go:89] found id: ""
	I0404 22:58:44.266533   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.266544   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:44.266599   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:44.266614   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:44.323524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:44.323565   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:44.340420   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:44.340456   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:44.441098   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:44.441120   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:44.441137   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:44.554462   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:44.554498   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:42.361529   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.362158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:45.805321   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.805597   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.342230   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.840465   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.101901   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:47.116485   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:47.116551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:47.158013   65393 cri.go:89] found id: ""
	I0404 22:58:47.158047   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.158063   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:47.158071   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:47.158136   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:47.194653   65393 cri.go:89] found id: ""
	I0404 22:58:47.194677   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.194688   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:47.194696   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:47.194753   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:47.235406   65393 cri.go:89] found id: ""
	I0404 22:58:47.235435   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.235447   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:47.235456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:47.235518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:47.278696   65393 cri.go:89] found id: ""
	I0404 22:58:47.278724   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.278733   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:47.278741   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:47.278832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:47.316839   65393 cri.go:89] found id: ""
	I0404 22:58:47.316871   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.316883   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:47.316890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:47.316952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:47.363249   65393 cri.go:89] found id: ""
	I0404 22:58:47.363274   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.363282   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:47.363287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:47.363336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:47.402331   65393 cri.go:89] found id: ""
	I0404 22:58:47.402354   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.402362   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:47.402369   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:47.402429   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:47.441136   65393 cri.go:89] found id: ""
	I0404 22:58:47.441156   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.441163   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:47.441171   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:47.441182   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:47.518956   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:47.518981   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:47.518996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:47.600303   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:47.600339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:47.642110   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:47.642138   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:47.694231   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:47.694267   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.209744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:50.229994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:50.230052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:50.299090   65393 cri.go:89] found id: ""
	I0404 22:58:50.299237   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.299257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:50.299266   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:50.299324   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:50.353476   65393 cri.go:89] found id: ""
	I0404 22:58:50.353506   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.353516   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:50.353524   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:50.353580   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:50.407652   65393 cri.go:89] found id: ""
	I0404 22:58:50.407677   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.407684   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:50.407692   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:50.407775   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:50.445622   65393 cri.go:89] found id: ""
	I0404 22:58:50.445651   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.445658   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:50.445666   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:50.445749   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:50.485764   65393 cri.go:89] found id: ""
	I0404 22:58:50.485791   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.485803   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:50.485810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:50.485891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:50.525552   65393 cri.go:89] found id: ""
	I0404 22:58:50.525583   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.525601   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:50.525609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:50.525675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:50.564452   65393 cri.go:89] found id: ""
	I0404 22:58:50.564477   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.564488   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:50.564498   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:50.564548   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:50.608534   65393 cri.go:89] found id: ""
	I0404 22:58:50.608564   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.608572   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:50.608580   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:50.608592   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:50.686645   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:50.686690   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:50.731681   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:50.731711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:50.788550   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:50.788593   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.804637   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:50.804666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:50.888452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:46.860641   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.362015   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.808312   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:52.304782   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:51.841258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.842803   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.341352   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.388937   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:53.403744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:53.403822   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:53.441796   65393 cri.go:89] found id: ""
	I0404 22:58:53.441817   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.441826   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:53.441835   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:53.441899   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:53.486302   65393 cri.go:89] found id: ""
	I0404 22:58:53.486326   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.486335   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:53.486340   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:53.486405   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:53.526429   65393 cri.go:89] found id: ""
	I0404 22:58:53.526455   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.526462   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:53.526467   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:53.526521   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:53.568076   65393 cri.go:89] found id: ""
	I0404 22:58:53.568099   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.568107   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:53.568114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:53.568185   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:53.608922   65393 cri.go:89] found id: ""
	I0404 22:58:53.608956   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.608964   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:53.608970   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:53.609027   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:53.645604   65393 cri.go:89] found id: ""
	I0404 22:58:53.645635   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.645646   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:53.645658   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:53.645718   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:53.684258   65393 cri.go:89] found id: ""
	I0404 22:58:53.684283   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.684293   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:53.684301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:53.684359   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:53.722616   65393 cri.go:89] found id: ""
	I0404 22:58:53.722647   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.722658   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:53.722671   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:53.722685   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:53.781126   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:53.781162   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:53.796188   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:53.796219   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:53.880536   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:53.880558   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:53.880571   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:53.970199   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:53.970236   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:51.861594   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.864468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.361876   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:54.306294   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.805489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.341483   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.840493   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.510589   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:56.525258   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:56.525336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:56.563463   65393 cri.go:89] found id: ""
	I0404 22:58:56.563495   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.563506   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:56.563514   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:56.563576   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:56.603330   65393 cri.go:89] found id: ""
	I0404 22:58:56.603355   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.603364   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:56.603369   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:56.603418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:56.645266   65393 cri.go:89] found id: ""
	I0404 22:58:56.645291   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.645351   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:56.645368   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:56.645426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:56.691073   65393 cri.go:89] found id: ""
	I0404 22:58:56.691098   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.691108   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:56.691121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:56.691179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:56.729687   65393 cri.go:89] found id: ""
	I0404 22:58:56.729726   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.729737   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:56.729744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:56.729832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:56.770699   65393 cri.go:89] found id: ""
	I0404 22:58:56.770732   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.770743   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:56.770751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:56.770815   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:56.808948   65393 cri.go:89] found id: ""
	I0404 22:58:56.808972   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.808982   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:56.808989   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:56.809069   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:56.852467   65393 cri.go:89] found id: ""
	I0404 22:58:56.852490   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.852501   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:56.852511   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:56.852526   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:56.903115   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:56.903147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:56.919521   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:56.919550   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:56.995265   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:56.995297   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:56.995317   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:57.071623   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:57.071660   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:59.614687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:59.628404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:59.628474   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:59.666293   65393 cri.go:89] found id: ""
	I0404 22:58:59.666320   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.666328   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:59.666332   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:59.666407   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:59.706066   65393 cri.go:89] found id: ""
	I0404 22:58:59.706093   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.706104   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:59.706111   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:59.706166   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:59.750467   65393 cri.go:89] found id: ""
	I0404 22:58:59.750493   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.750504   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:59.750511   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:59.750575   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:59.788477   65393 cri.go:89] found id: ""
	I0404 22:58:59.788499   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.788507   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:59.788512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:59.788558   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:59.829030   65393 cri.go:89] found id: ""
	I0404 22:58:59.829052   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.829062   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:59.829069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:59.829151   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:59.870121   65393 cri.go:89] found id: ""
	I0404 22:58:59.870146   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.870156   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:59.870163   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:59.870225   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:59.912149   65393 cri.go:89] found id: ""
	I0404 22:58:59.912170   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.912178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:59.912185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:59.912245   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:59.950867   65393 cri.go:89] found id: ""
	I0404 22:58:59.950903   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.950914   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:59.950924   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:59.950950   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:00.031828   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:00.031862   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:00.079398   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:00.079425   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:00.128993   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:00.129024   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:00.146214   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:00.146238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:00.224580   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:58.362770   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.861231   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.806527   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.305039   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:03.306465   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.845344   64902 pod_ready.go:81] duration metric: took 4m0.011500779s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:01.845369   64902 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:01.845377   64902 pod_ready.go:38] duration metric: took 4m3.21302807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:01.845392   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:01.845433   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:01.845499   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:01.927539   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:01.927568   64902 cri.go:89] found id: ""
	I0404 22:59:01.927578   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:01.927638   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.933410   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:01.933496   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:01.990735   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:01.990765   64902 cri.go:89] found id: ""
	I0404 22:59:01.990773   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:01.990823   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.996039   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:01.996159   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.043251   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:02.043278   64902 cri.go:89] found id: ""
	I0404 22:59:02.043286   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:02.043339   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.048227   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.048300   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.089311   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:02.089334   64902 cri.go:89] found id: ""
	I0404 22:59:02.089344   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:02.089400   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.094466   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.094531   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.146624   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:02.146646   64902 cri.go:89] found id: ""
	I0404 22:59:02.146653   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:02.146711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.151408   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.151491   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.194337   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:02.194361   64902 cri.go:89] found id: ""
	I0404 22:59:02.194370   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:02.194423   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.199225   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:02.199290   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:02.240467   64902 cri.go:89] found id: ""
	I0404 22:59:02.240494   64902 logs.go:276] 0 containers: []
	W0404 22:59:02.240505   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:02.240511   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:02.240572   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:02.284235   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:02.284261   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.284265   64902 cri.go:89] found id: ""
	I0404 22:59:02.284272   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:02.284337   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.289673   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.294519   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:02.294542   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.335250   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:02.335274   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:02.903414   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:02.903453   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:02.959171   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:02.959205   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:03.121608   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:03.121639   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:03.178477   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:03.178513   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:03.220790   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:03.220827   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:03.268659   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:03.268691   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:03.349809   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:03.349853   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:03.397296   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.397325   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:03.450216   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.450242   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.467583   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:03.467610   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:03.525777   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:03.525816   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.077111   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:06.097631   64902 api_server.go:72] duration metric: took 4m15.180038628s to wait for apiserver process to appear ...
	I0404 22:59:06.097660   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:06.097705   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:06.097767   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:06.146987   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:06.147013   64902 cri.go:89] found id: ""
	I0404 22:59:06.147023   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:06.147083   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.153474   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:06.153549   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:06.204491   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.204515   64902 cri.go:89] found id: ""
	I0404 22:59:06.204522   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:06.204576   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.209689   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:06.209768   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.248698   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:06.248730   64902 cri.go:89] found id: ""
	I0404 22:59:06.248741   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:06.248803   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.254268   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.254362   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.301004   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.301024   64902 cri.go:89] found id: ""
	I0404 22:59:06.301034   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:06.301093   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.306557   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.306625   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.350111   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:06.350136   64902 cri.go:89] found id: ""
	I0404 22:59:06.350146   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:06.350205   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.355488   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.355574   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.724888   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:02.738926   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:02.738986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:02.780451   65393 cri.go:89] found id: ""
	I0404 22:59:02.780475   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.780486   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:02.780493   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:02.780551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:02.825725   65393 cri.go:89] found id: ""
	I0404 22:59:02.825745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.825753   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:02.825758   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:02.825806   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.866717   65393 cri.go:89] found id: ""
	I0404 22:59:02.866745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.866752   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:02.866757   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.866803   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.909020   65393 cri.go:89] found id: ""
	I0404 22:59:02.909040   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.909048   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:02.909053   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.909103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.951031   65393 cri.go:89] found id: ""
	I0404 22:59:02.951055   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.951064   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:02.951069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.951128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:03.000274   65393 cri.go:89] found id: ""
	I0404 22:59:03.000304   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.000315   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:03.000322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:03.000385   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:03.041766   65393 cri.go:89] found id: ""
	I0404 22:59:03.041797   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.041807   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:03.041814   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:03.041871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:03.086600   65393 cri.go:89] found id: ""
	I0404 22:59:03.086623   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.086631   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:03.086639   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:03.086654   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:03.145868   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.145902   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.164345   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:03.164373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:03.239295   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:03.239331   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:03.239347   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:03.337429   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.337471   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:05.885881   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:05.899569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:05.899627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:05.939067   65393 cri.go:89] found id: ""
	I0404 22:59:05.939090   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.939097   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:05.939104   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:05.939163   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:05.977410   65393 cri.go:89] found id: ""
	I0404 22:59:05.977434   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.977441   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:05.977447   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:05.977492   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.018127   65393 cri.go:89] found id: ""
	I0404 22:59:06.018149   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.018156   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:06.018161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.018211   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.057280   65393 cri.go:89] found id: ""
	I0404 22:59:06.057316   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.057327   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:06.057334   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.057396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.095220   65393 cri.go:89] found id: ""
	I0404 22:59:06.095246   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.095255   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:06.095262   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.095334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:06.136192   65393 cri.go:89] found id: ""
	I0404 22:59:06.136291   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.136310   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:06.136320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.136381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.193307   65393 cri.go:89] found id: ""
	I0404 22:59:06.193336   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.193347   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.193355   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:06.193415   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:06.233527   65393 cri.go:89] found id: ""
	I0404 22:59:06.233558   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.233566   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:06.233574   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.233585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:06.320567   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.320602   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.363687   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:06.363718   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:06.423209   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:06.423246   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:06.437978   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:06.438009   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:02.862485   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.360827   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.805057   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:07.805584   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:06.405660   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.405683   64902 cri.go:89] found id: ""
	I0404 22:59:06.405693   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:06.405758   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.410717   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.410794   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.462354   64902 cri.go:89] found id: ""
	I0404 22:59:06.462386   64902 logs.go:276] 0 containers: []
	W0404 22:59:06.462398   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.462404   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:06.462452   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:06.511014   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.511038   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.511044   64902 cri.go:89] found id: ""
	I0404 22:59:06.511052   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:06.511110   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.517858   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.522766   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:06.522794   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.576654   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:06.576689   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.623256   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:06.623286   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.678337   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:06.678369   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.722261   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:06.722291   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.762151   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.762184   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.814956   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.814983   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:07.273914   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:07.273975   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:07.328704   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:07.328745   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:07.344734   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:07.344765   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:07.473031   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:07.473067   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:07.523879   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:07.523922   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:07.569734   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:07.569793   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.109972   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:59:10.115439   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:59:10.117026   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:59:10.117051   64902 api_server.go:131] duration metric: took 4.019378057s to wait for apiserver health ...
	I0404 22:59:10.117059   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:10.117084   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:10.117138   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:10.161095   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.161113   64902 cri.go:89] found id: ""
	I0404 22:59:10.161120   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:10.161167   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.165630   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:10.165694   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:10.204636   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.204655   64902 cri.go:89] found id: ""
	I0404 22:59:10.204662   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:10.204711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.209645   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:10.209721   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:10.256830   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:10.256856   64902 cri.go:89] found id: ""
	I0404 22:59:10.256866   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:10.256917   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.261699   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:10.261763   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:10.304897   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.304914   64902 cri.go:89] found id: ""
	I0404 22:59:10.304922   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:10.304976   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.310884   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:10.310961   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:10.349724   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.349745   64902 cri.go:89] found id: ""
	I0404 22:59:10.349754   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:10.349811   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.354588   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:10.354643   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:10.399066   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.399087   64902 cri.go:89] found id: ""
	I0404 22:59:10.399113   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:10.399160   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.404698   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:10.404771   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:10.454142   64902 cri.go:89] found id: ""
	I0404 22:59:10.454173   64902 logs.go:276] 0 containers: []
	W0404 22:59:10.454183   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:10.454189   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:10.454347   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:10.503594   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.503620   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.503624   64902 cri.go:89] found id: ""
	I0404 22:59:10.503633   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:10.503696   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.510223   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.515081   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:10.515102   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.571927   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:10.571965   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.625355   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:10.625391   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.669033   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:10.669061   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.729035   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:10.729070   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.778108   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:10.778138   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.828328   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:10.828355   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:10.885732   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:10.885784   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:10.905718   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:10.905759   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:11.284079   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:11.284115   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:11.328982   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:11.329013   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:11.372384   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:11.372415   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:06.524620   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:09.025228   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:09.040219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:09.040306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:09.077397   65393 cri.go:89] found id: ""
	I0404 22:59:09.077428   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.077439   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:09.077447   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:09.077530   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:09.115279   65393 cri.go:89] found id: ""
	I0404 22:59:09.115309   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.115319   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:09.115326   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:09.115391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:09.156338   65393 cri.go:89] found id: ""
	I0404 22:59:09.156367   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.156375   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:09.156381   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:09.156444   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:09.199281   65393 cri.go:89] found id: ""
	I0404 22:59:09.199310   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.199319   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:09.199325   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:09.199377   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:09.239845   65393 cri.go:89] found id: ""
	I0404 22:59:09.239870   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.239878   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:09.239883   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:09.239944   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:09.285520   65393 cri.go:89] found id: ""
	I0404 22:59:09.285551   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.285562   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:09.285569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:09.285635   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:09.322005   65393 cri.go:89] found id: ""
	I0404 22:59:09.322033   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.322043   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:09.322050   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:09.322113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:09.357356   65393 cri.go:89] found id: ""
	I0404 22:59:09.357384   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.357394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:09.357404   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:09.357419   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:09.437353   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:09.437389   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:09.480066   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:09.480095   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:09.534394   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:09.534433   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:09.548926   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:09.548951   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:09.623970   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:07.363250   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.860037   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.806287   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:12.306720   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:11.521189   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:11.521222   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:14.076828   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:14.076859   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.076866   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.076872   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.076878   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.076882   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.076888   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.076895   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.076901   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.076911   64902 system_pods.go:74] duration metric: took 3.959845225s to wait for pod list to return data ...
	I0404 22:59:14.076920   64902 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:14.080018   64902 default_sa.go:45] found service account: "default"
	I0404 22:59:14.080052   64902 default_sa.go:55] duration metric: took 3.124198ms for default service account to be created ...
	I0404 22:59:14.080063   64902 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:14.085812   64902 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:14.085841   64902 system_pods.go:89] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.085847   64902 system_pods.go:89] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.085851   64902 system_pods.go:89] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.085855   64902 system_pods.go:89] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.085859   64902 system_pods.go:89] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.085863   64902 system_pods.go:89] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.085871   64902 system_pods.go:89] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.085875   64902 system_pods.go:89] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.085882   64902 system_pods.go:126] duration metric: took 5.81489ms to wait for k8s-apps to be running ...
	I0404 22:59:14.085889   64902 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:14.085933   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:14.106253   64902 system_svc.go:56] duration metric: took 20.352553ms WaitForService to wait for kubelet
	I0404 22:59:14.106295   64902 kubeadm.go:576] duration metric: took 4m23.188703249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:14.106319   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:14.110333   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:14.110359   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:14.110373   64902 node_conditions.go:105] duration metric: took 4.048469ms to run NodePressure ...
	I0404 22:59:14.110389   64902 start.go:240] waiting for startup goroutines ...
	I0404 22:59:14.110399   64902 start.go:245] waiting for cluster config update ...
	I0404 22:59:14.110412   64902 start.go:254] writing updated cluster config ...
	I0404 22:59:14.110736   64902 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:14.160959   64902 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 22:59:14.164129   64902 out.go:177] * Done! kubectl is now configured to use "embed-certs-143118" cluster and "default" namespace by default
	I0404 22:59:12.124508   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:12.139786   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:12.139862   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:12.179841   65393 cri.go:89] found id: ""
	I0404 22:59:12.179864   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.179872   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:12.179877   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:12.179934   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:12.217232   65393 cri.go:89] found id: ""
	I0404 22:59:12.217260   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.217270   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:12.217277   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:12.217333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:12.258875   65393 cri.go:89] found id: ""
	I0404 22:59:12.258905   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.258917   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:12.258927   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:12.258990   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:12.302455   65393 cri.go:89] found id: ""
	I0404 22:59:12.302493   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.302508   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:12.302516   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:12.302581   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:12.345264   65393 cri.go:89] found id: ""
	I0404 22:59:12.345298   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.345310   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:12.345322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:12.345386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:12.383782   65393 cri.go:89] found id: ""
	I0404 22:59:12.383805   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.383814   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:12.383820   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:12.383881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:12.421738   65393 cri.go:89] found id: ""
	I0404 22:59:12.421767   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.421777   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:12.421784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:12.421844   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:12.460348   65393 cri.go:89] found id: ""
	I0404 22:59:12.460379   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.460391   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:12.460407   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:12.460424   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:12.516043   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:12.516081   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:12.531557   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:12.531585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:12.603052   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:12.603080   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:12.603098   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:12.689033   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:12.689069   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.253621   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:15.268084   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:15.268162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:15.306888   65393 cri.go:89] found id: ""
	I0404 22:59:15.306913   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.306922   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:15.306929   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:15.306986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:15.345169   65393 cri.go:89] found id: ""
	I0404 22:59:15.345203   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.345214   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:15.345221   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:15.345279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:15.381835   65393 cri.go:89] found id: ""
	I0404 22:59:15.381863   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.381874   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:15.381881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:15.381941   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:15.418221   65393 cri.go:89] found id: ""
	I0404 22:59:15.418247   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.418254   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:15.418259   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:15.418302   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:15.456658   65393 cri.go:89] found id: ""
	I0404 22:59:15.456684   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.456696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:15.456703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:15.456761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:15.498325   65393 cri.go:89] found id: ""
	I0404 22:59:15.498349   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.498359   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:15.498367   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:15.498443   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:15.538694   65393 cri.go:89] found id: ""
	I0404 22:59:15.538723   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.538731   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:15.538738   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:15.538796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:15.575615   65393 cri.go:89] found id: ""
	I0404 22:59:15.575642   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.575650   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:15.575660   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:15.575672   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.616824   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:15.616851   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:15.670897   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:15.670945   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:15.688394   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:15.688429   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:15.764184   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:15.764207   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:15.764222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:11.860993   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:13.861520   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:16.361055   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:14.809060   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:17.308968   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:18.346181   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:18.361390   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:18.361465   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:18.410432   65393 cri.go:89] found id: ""
	I0404 22:59:18.410463   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.410474   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:18.410482   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:18.410547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:18.449280   65393 cri.go:89] found id: ""
	I0404 22:59:18.449309   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.449317   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:18.449322   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:18.449380   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:18.508387   65393 cri.go:89] found id: ""
	I0404 22:59:18.508411   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.508420   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:18.508425   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:18.508481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:18.555469   65393 cri.go:89] found id: ""
	I0404 22:59:18.555492   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.555501   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:18.555506   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:18.555551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:18.592206   65393 cri.go:89] found id: ""
	I0404 22:59:18.592231   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.592239   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:18.592245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:18.592294   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:18.628850   65393 cri.go:89] found id: ""
	I0404 22:59:18.628890   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.628900   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:18.628908   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:18.628968   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:18.667507   65393 cri.go:89] found id: ""
	I0404 22:59:18.667543   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.667556   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:18.667564   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:18.667630   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:18.706367   65393 cri.go:89] found id: ""
	I0404 22:59:18.706392   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.706410   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:18.706422   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:18.706438   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:18.761069   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:18.761108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:18.777164   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:18.777204   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:18.861741   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:18.861769   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:18.861782   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:18.948064   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:18.948108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:18.361239   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:20.362324   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:19.805576   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.805654   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.497977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:21.511810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:21.511901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:21.548151   65393 cri.go:89] found id: ""
	I0404 22:59:21.548177   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.548188   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:21.548196   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:21.548241   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:21.587395   65393 cri.go:89] found id: ""
	I0404 22:59:21.587420   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.587436   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:21.587441   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:21.587507   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:21.624280   65393 cri.go:89] found id: ""
	I0404 22:59:21.624312   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.624322   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:21.624330   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:21.624391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:21.664557   65393 cri.go:89] found id: ""
	I0404 22:59:21.664583   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.664593   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:21.664600   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:21.664666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:21.705570   65393 cri.go:89] found id: ""
	I0404 22:59:21.705601   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.705614   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:21.705622   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:21.705683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:21.744722   65393 cri.go:89] found id: ""
	I0404 22:59:21.744755   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.744764   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:21.744770   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:21.744831   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:21.784997   65393 cri.go:89] found id: ""
	I0404 22:59:21.785036   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.785047   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:21.785054   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:21.785117   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:21.825404   65393 cri.go:89] found id: ""
	I0404 22:59:21.825428   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.825435   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:21.825443   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:21.825467   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:21.880421   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:21.880470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:21.898337   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:21.898367   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:21.987201   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:21.987233   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:21.987249   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.070135   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:22.070176   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:24.613694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:24.627661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:24.627823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:24.667552   65393 cri.go:89] found id: ""
	I0404 22:59:24.667580   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.667594   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:24.667601   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:24.667663   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:24.705866   65393 cri.go:89] found id: ""
	I0404 22:59:24.705888   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.705897   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:24.705905   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:24.705975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:24.746920   65393 cri.go:89] found id: ""
	I0404 22:59:24.746948   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.746959   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:24.746967   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:24.747021   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:24.792236   65393 cri.go:89] found id: ""
	I0404 22:59:24.792259   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.792270   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:24.792281   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:24.792340   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:24.832066   65393 cri.go:89] found id: ""
	I0404 22:59:24.832096   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.832107   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:24.832133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:24.832207   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:24.873568   65393 cri.go:89] found id: ""
	I0404 22:59:24.873594   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.873605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:24.873612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:24.873678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:24.922719   65393 cri.go:89] found id: ""
	I0404 22:59:24.922743   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.922750   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:24.922756   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:24.922801   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:24.981180   65393 cri.go:89] found id: ""
	I0404 22:59:24.981229   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.981243   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:24.981255   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:24.981272   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:25.039695   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:25.039735   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:25.097992   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:25.098037   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:25.113941   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:25.113970   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:25.185615   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:25.185643   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:25.185659   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.362720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.363009   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.305260   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:26.805336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:27.772867   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:27.787478   65393 kubeadm.go:591] duration metric: took 4m3.182360219s to restartPrimaryControlPlane
	W0404 22:59:27.787560   65393 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:27.787594   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 22:59:30.083285   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.29566411s)
	I0404 22:59:30.083364   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:30.099547   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:59:30.110792   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:59:30.123094   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:59:30.123110   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:59:30.123152   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:59:30.133535   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:59:30.133596   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:59:30.144194   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:59:30.154411   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:59:30.154476   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:59:30.164648   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.174227   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:59:30.174292   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.184396   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:59:30.194311   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:59:30.194370   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
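The four grep/rm pairs above are minikube's stale-kubeconfig cleanup before re-running kubeadm init: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the endpoint is not found (here every file is already absent, so each grep exits with status 2 and the rm is a no-op). A minimal Go sketch of that pattern, run locally rather than over SSH as minikube does; the helper name is illustrative, not minikube's own:

package main

import (
	"fmt"
	"os/exec"
)

// cleanupStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, so kubeadm init can regenerate it.
func cleanupStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file does not exist.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			// rm -f succeeds even when the file is already absent, as in the log above.
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}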
	I0404 22:59:30.204463   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 22:59:30.284881   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 22:59:30.285065   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 22:59:30.439256   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 22:59:30.439379   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 22:59:30.439558   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 22:59:30.640320   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 22:59:30.642667   65393 out.go:204]   - Generating certificates and keys ...
	I0404 22:59:30.642787   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 22:59:30.642883   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 22:59:30.643858   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 22:59:30.643964   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 22:59:30.644068   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 22:59:30.644183   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 22:59:30.644914   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 22:59:30.646151   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 22:59:30.646413   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 22:59:30.646975   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 22:59:30.647036   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 22:59:30.647163   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 22:59:30.818578   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 22:59:31.002928   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 22:59:31.300200   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 22:59:26.861545   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:29.361333   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.508251   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 22:59:31.525515   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 22:59:31.527679   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 22:59:31.527773   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 22:59:31.680829   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 22:59:28.806960   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.305235   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:33.305735   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.682802   65393 out.go:204]   - Booting up control plane ...
	I0404 22:59:31.682939   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 22:59:31.684100   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 22:59:31.685552   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 22:59:31.686931   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 22:59:31.689241   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 22:59:31.863885   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:34.361582   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:36.362733   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:35.810851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:37.305033   65047 pod_ready.go:81] duration metric: took 4m0.006524977s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:37.305064   65047 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:37.305072   65047 pod_ready.go:38] duration metric: took 4m5.047389638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
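The pod_ready.go lines throughout this log come from a wait loop that polls a pod until its Ready condition turns True or a deadline passes, as it just did after 4m0s for metrics-server. A rough client-go sketch of such a loop; the kubeconfig path and pod name below are copied from this log purely for illustration and this is not minikube's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(clientset *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(clientset, "kube-system", "metrics-server-569cc877fc-5q4ff", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}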
	I0404 22:59:37.305095   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:37.305121   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:37.305167   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:37.365002   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:37.365022   65047 cri.go:89] found id: ""
	I0404 22:59:37.365029   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:37.365079   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.370431   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:37.370490   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:37.411461   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:37.411489   65047 cri.go:89] found id: ""
	I0404 22:59:37.411498   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:37.411546   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.416214   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:37.416280   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:37.467470   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:37.467500   65047 cri.go:89] found id: ""
	I0404 22:59:37.467510   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:37.467565   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.472332   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:37.472394   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:37.511792   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:37.511815   65047 cri.go:89] found id: ""
	I0404 22:59:37.511821   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:37.511870   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.516458   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:37.516514   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:37.556843   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:37.556869   65047 cri.go:89] found id: ""
	I0404 22:59:37.556880   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:37.556941   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.561556   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:37.561617   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:37.601741   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.601764   65047 cri.go:89] found id: ""
	I0404 22:59:37.601775   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:37.601831   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.606376   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:37.606449   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:37.647116   65047 cri.go:89] found id: ""
	I0404 22:59:37.647139   65047 logs.go:276] 0 containers: []
	W0404 22:59:37.647146   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:37.647151   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:37.647211   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:37.694580   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.694603   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:37.694607   65047 cri.go:89] found id: ""
	I0404 22:59:37.694614   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:37.694662   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.699109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.703776   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:37.703802   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.758969   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:37.759001   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.808316   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:37.808339   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:37.873353   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:37.873388   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:37.891256   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:37.891290   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:38.047292   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:38.047323   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:38.104845   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:38.104881   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:38.180173   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:38.180209   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:38.225152   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:38.225185   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:38.275621   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:38.275647   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:38.861119   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:40.862057   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:38.791198   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:38.791239   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:38.838995   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:38.839032   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:38.889944   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:38.889980   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.437490   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:41.454599   65047 api_server.go:72] duration metric: took 4m14.969220816s to wait for apiserver process to appear ...
	I0404 22:59:41.454641   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:41.454676   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:41.454729   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:41.496719   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.496740   65047 cri.go:89] found id: ""
	I0404 22:59:41.496747   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:41.496790   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.501869   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:41.501949   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:41.548019   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:41.548038   65047 cri.go:89] found id: ""
	I0404 22:59:41.548047   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:41.548105   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.552683   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:41.552743   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:41.594018   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.594046   65047 cri.go:89] found id: ""
	I0404 22:59:41.594054   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:41.594109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.598612   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:41.598670   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:41.640618   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:41.640644   65047 cri.go:89] found id: ""
	I0404 22:59:41.640654   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:41.640711   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.645201   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:41.645271   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:41.691378   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.691401   65047 cri.go:89] found id: ""
	I0404 22:59:41.691410   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:41.691465   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.696359   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:41.696421   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:41.738169   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:41.738199   65047 cri.go:89] found id: ""
	I0404 22:59:41.738208   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:41.738261   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.742769   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:41.742844   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:41.782136   65047 cri.go:89] found id: ""
	I0404 22:59:41.782163   65047 logs.go:276] 0 containers: []
	W0404 22:59:41.782175   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:41.782181   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:41.782244   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:41.825698   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:41.825717   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:41.825721   65047 cri.go:89] found id: ""
	I0404 22:59:41.825728   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:41.825773   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.834332   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.840251   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:41.840280   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.914817   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:41.914856   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.956375   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:41.956401   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.999930   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:41.999960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:42.067118   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:42.067148   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:42.104788   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:42.104818   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:42.542407   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:42.542444   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:42.596923   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:42.596957   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:42.613545   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:42.613571   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:42.732728   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:42.732756   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:42.801975   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:42.802010   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:42.844728   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:42.844757   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:42.897576   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:42.897602   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:42.863167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.360824   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.448107   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:59:45.453565   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:59:45.454921   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:59:45.454945   65047 api_server.go:131] duration metric: took 4.000295856s to wait for apiserver health ...
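The healthz probe logged just above is an HTTPS GET against the apiserver, repeated until it answers 200 with body "ok". A rough Go equivalent, assuming the same https://192.168.50.77:8443 endpoint and skipping TLS verification purely for illustration; minikube's real check differs in its certificate and retry handling:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200 "ok"
// or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s did not report healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.77:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}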
	I0404 22:59:45.454955   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:45.454985   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:45.455041   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:45.499810   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:45.499841   65047 cri.go:89] found id: ""
	I0404 22:59:45.499849   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:45.499903   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.506696   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:45.506797   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:45.549063   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:45.549087   65047 cri.go:89] found id: ""
	I0404 22:59:45.549094   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:45.549135   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.553601   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:45.553663   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:45.605961   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:45.605989   65047 cri.go:89] found id: ""
	I0404 22:59:45.605999   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:45.606057   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.610424   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:45.610496   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:45.651488   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:45.651520   65047 cri.go:89] found id: ""
	I0404 22:59:45.651530   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:45.651589   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.655935   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:45.656005   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:45.694207   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:45.694225   65047 cri.go:89] found id: ""
	I0404 22:59:45.694235   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:45.694290   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.698466   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:45.698520   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:45.736294   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:45.736318   65047 cri.go:89] found id: ""
	I0404 22:59:45.736326   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:45.736389   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.741126   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:45.741200   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:45.783225   65047 cri.go:89] found id: ""
	I0404 22:59:45.783254   65047 logs.go:276] 0 containers: []
	W0404 22:59:45.783265   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:45.783271   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:45.783332   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:45.823086   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:45.823110   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:45.823116   65047 cri.go:89] found id: ""
	I0404 22:59:45.823126   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:45.823182   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.827497   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.832389   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:45.832416   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:45.892925   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:45.892967   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:46.018980   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:46.019011   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:46.083405   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:46.083438   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:46.144135   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:46.144169   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:46.190770   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:46.190803   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:46.257768   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:46.257801   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:46.275102   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:46.275127   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:46.312291   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:46.312318   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:46.351086   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:46.351112   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:46.393478   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:46.393506   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:46.438745   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:46.438778   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:46.822672   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:46.822706   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:49.386090   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:49.386122   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.386127   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.386133   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.386137   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.386140   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.386143   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.386150   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.386156   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.386165   65047 system_pods.go:74] duration metric: took 3.931202298s to wait for pod list to return data ...
	I0404 22:59:49.386176   65047 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:49.389035   65047 default_sa.go:45] found service account: "default"
	I0404 22:59:49.389067   65047 default_sa.go:55] duration metric: took 2.877732ms for default service account to be created ...
	I0404 22:59:49.389079   65047 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:49.394829   65047 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:49.394859   65047 system_pods.go:89] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.394867   65047 system_pods.go:89] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.394875   65047 system_pods.go:89] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.394882   65047 system_pods.go:89] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.394888   65047 system_pods.go:89] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.394896   65047 system_pods.go:89] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.394904   65047 system_pods.go:89] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.394912   65047 system_pods.go:89] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.394922   65047 system_pods.go:126] duration metric: took 5.837975ms to wait for k8s-apps to be running ...
	I0404 22:59:49.394931   65047 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:49.394980   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:49.414076   65047 system_svc.go:56] duration metric: took 19.132995ms WaitForService to wait for kubelet
	I0404 22:59:49.414111   65047 kubeadm.go:576] duration metric: took 4m22.928735837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:49.414133   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:49.417160   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:49.417189   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:49.417204   65047 node_conditions.go:105] duration metric: took 3.065597ms to run NodePressure ...
	I0404 22:59:49.417218   65047 start.go:240] waiting for startup goroutines ...
	I0404 22:59:49.417228   65047 start.go:245] waiting for cluster config update ...
	I0404 22:59:49.417240   65047 start.go:254] writing updated cluster config ...
	I0404 22:59:49.417615   65047 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:49.470214   65047 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0404 22:59:49.472584   65047 out.go:177] * Done! kubectl is now configured to use "no-preload-024416" cluster and "default" namespace by default
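The closing line for this profile compares the local kubectl minor version (1.29) with the cluster version (1.30) and reports a skew of one minor release. A small Go sketch of that comparison, assuming plain "major.minor.patch" version strings; minikube itself relies on a semver library rather than manual parsing:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of a
// client and server version string such as "1.29.3" or "1.30.0-rc.0".
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.29.3", "1.30.0-rc.0")
	fmt.Printf("kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: %d)\n", skew)
}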
	I0404 22:59:47.361662   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:49.861684   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:52.360447   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:54.361574   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:56.361652   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:57.854723   64791 pod_ready.go:81] duration metric: took 4m0.000936307s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:57.854770   64791 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" (will not retry!)
	I0404 22:59:57.854788   64791 pod_ready.go:38] duration metric: took 4m7.483622498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:57.854822   64791 kubeadm.go:591] duration metric: took 4m16.210645162s to restartPrimaryControlPlane
	W0404 22:59:57.854889   64791 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:57.854920   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:00:11.689226   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:00:11.689589   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:11.689862   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:16.690194   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:16.690413   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:30.113988   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.259044217s)
	I0404 23:00:30.114079   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:30.130372   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 23:00:30.141114   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:00:30.151572   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:00:30.151606   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 23:00:30.151649   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 23:00:30.162006   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:00:30.162058   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:00:30.172386   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 23:00:30.182463   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:00:30.182526   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:00:30.192462   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.202932   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:00:30.203003   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.212623   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 23:00:30.222016   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:00:30.222079   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 23:00:30.231912   64791 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:00:30.292277   64791 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0404 23:00:30.292356   64791 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:00:30.453305   64791 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:00:30.453442   64791 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:00:30.453626   64791 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:00:30.680949   64791 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:00:26.690539   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:26.690756   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:30.683066   64791 out.go:204]   - Generating certificates and keys ...
	I0404 23:00:30.683154   64791 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:00:30.683236   64791 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:00:30.683345   64791 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:00:30.683428   64791 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:00:30.683546   64791 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:00:30.683635   64791 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:00:30.683733   64791 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:00:30.683815   64791 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:00:30.684097   64791 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:00:30.684545   64791 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:00:30.684950   64791 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:00:30.685055   64791 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:00:30.969937   64791 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:00:31.196768   64791 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0404 23:00:31.562187   64791 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:00:32.005580   64791 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:00:32.066695   64791 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:00:32.067434   64791 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:00:32.070078   64791 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:00:32.072059   64791 out.go:204]   - Booting up control plane ...
	I0404 23:00:32.072188   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:00:32.072299   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:00:32.072803   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:00:32.094384   64791 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:00:32.095424   64791 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:00:32.095565   64791 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:00:32.235639   64791 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:00:37.740974   64791 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.503079 seconds
	I0404 23:00:37.757121   64791 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0404 23:00:37.771037   64791 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0404 23:00:38.311403   64791 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0404 23:00:38.311608   64791 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-952083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0404 23:00:38.826726   64791 kubeadm.go:309] [bootstrap-token] Using token: fa5m5r.x1an64r8vrgp89m4
	I0404 23:00:38.828302   64791 out.go:204]   - Configuring RBAC rules ...
	I0404 23:00:38.828445   64791 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0404 23:00:38.835805   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0404 23:00:38.846727   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0404 23:00:38.850494   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0404 23:00:38.854653   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0404 23:00:38.859330   64791 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0404 23:00:38.882227   64791 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0404 23:00:39.129283   64791 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0404 23:00:39.263855   64791 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0404 23:00:39.265385   64791 kubeadm.go:309] 
	I0404 23:00:39.265534   64791 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0404 23:00:39.265556   64791 kubeadm.go:309] 
	I0404 23:00:39.265675   64791 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0404 23:00:39.265697   64791 kubeadm.go:309] 
	I0404 23:00:39.265728   64791 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0404 23:00:39.265816   64791 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0404 23:00:39.265873   64791 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0404 23:00:39.265883   64791 kubeadm.go:309] 
	I0404 23:00:39.265948   64791 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0404 23:00:39.265957   64791 kubeadm.go:309] 
	I0404 23:00:39.266009   64791 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0404 23:00:39.266018   64791 kubeadm.go:309] 
	I0404 23:00:39.266072   64791 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0404 23:00:39.266189   64791 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0404 23:00:39.266303   64791 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0404 23:00:39.266313   64791 kubeadm.go:309] 
	I0404 23:00:39.266445   64791 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0404 23:00:39.266542   64791 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0404 23:00:39.266555   64791 kubeadm.go:309] 
	I0404 23:00:39.266661   64791 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.266828   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c \
	I0404 23:00:39.266878   64791 kubeadm.go:309] 	--control-plane 
	I0404 23:00:39.266887   64791 kubeadm.go:309] 
	I0404 23:00:39.267081   64791 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0404 23:00:39.267099   64791 kubeadm.go:309] 
	I0404 23:00:39.267206   64791 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.267353   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c 
	I0404 23:00:39.278406   64791 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
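The init output above ends with a join command that embeds a one-time bootstrap token and the CA certificate hash. If that token expires or is lost before more nodes are joined, a fresh join command can be printed from the control-plane node; a minimal sketch, assuming the kubeadm binary and admin kubeconfig live at the paths minikube uses in this run:

    # Hypothetical follow-up, not part of the logged run: reprint a join command
    # using the kubeadm binary path and admin kubeconfig seen in the log above.
    sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" \
      kubeadm token create --print-join-command \
      --kubeconfig /etc/kubernetes/admin.conf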
	I0404 23:00:39.278440   64791 cni.go:84] Creating CNI manager for ""
	I0404 23:00:39.278450   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 23:00:39.280130   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 23:00:39.281491   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 23:00:39.300708   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
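The 496-byte conflist written above configures the bridge CNI recommended for the kvm2 driver with cri-o. The log does not show the file's contents; the following is an illustrative bridge configuration of the same shape (plugin fields and the pod subnet are assumptions, not the actual file):

    # Illustrative only -- writes a typical bridge CNI conflist; values are assumed.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF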
	I0404 23:00:39.411609   64791 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 23:00:39.411700   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:39.411741   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-952083 minikube.k8s.io/updated_at=2024_04_04T23_00_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=default-k8s-diff-port-952083 minikube.k8s.io/primary=true
	I0404 23:00:39.491213   64791 ops.go:34] apiserver oom_adj: -16
	I0404 23:00:39.639999   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.140239   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.640887   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.140139   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.641048   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.140111   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.640439   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.140701   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.640565   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.140884   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.640405   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.140665   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.640037   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.140251   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.640442   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.691433   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:46.691736   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:47.140207   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:47.640279   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.141046   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.640259   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.140525   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.640869   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.140946   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.640215   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.140706   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.640589   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.140926   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.640774   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.755392   64791 kubeadm.go:1107] duration metric: took 13.343764127s to wait for elevateKubeSystemPrivileges
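The repeated "kubectl get sa default" calls above poll until the default ServiceAccount exists, after which the minikube-rbac ClusterRoleBinding issued at 23:00:39 can take effect; elevateKubeSystemPrivileges then reports the total wait. A rough shell equivalent of that sequence (a sketch, not minikube's actual implementation):

    # Sketch of the elevate-privileges sequence visible in the log above.
    KUBECTL=/var/lib/minikube/binaries/v1.29.3/kubectl
    CFG=/var/lib/minikube/kubeconfig
    # Grant cluster-admin to kube-system's default ServiceAccount, as in the log.
    sudo "$KUBECTL" --kubeconfig="$CFG" create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    # Then poll until the "default" ServiceAccount exists, as the repeated gets do.
    until sudo "$KUBECTL" --kubeconfig="$CFG" get sa default >/dev/null 2>&1; do
      sleep 0.5
    done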
	W0404 23:00:52.755430   64791 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0404 23:00:52.755438   64791 kubeadm.go:393] duration metric: took 5m11.165500768s to StartCluster
	I0404 23:00:52.755452   64791 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.755542   64791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 23:00:52.757921   64791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.758240   64791 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 23:00:52.760258   64791 out.go:177] * Verifying Kubernetes components...
	I0404 23:00:52.758360   64791 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 23:00:52.758468   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 23:00:52.761972   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 23:00:52.761988   64791 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.761992   64791 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762023   64791 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-952083"
	I0404 23:00:52.762033   64791 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762045   64791 addons.go:243] addon metrics-server should already be in state true
	I0404 23:00:52.761969   64791 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762082   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762089   64791 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762098   64791 addons.go:243] addon storage-provisioner should already be in state true
	I0404 23:00:52.762120   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762369   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762402   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762483   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762595   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.780081   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33543
	I0404 23:00:52.780630   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.782784   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0404 23:00:52.782816   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0404 23:00:52.783239   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783487   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783765   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783788   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.783934   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783952   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784138   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784378   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784386   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.784518   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.784536   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784883   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.785321   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785347   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.785391   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785423   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.788871   64791 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.788894   64791 addons.go:243] addon default-storageclass should already be in state true
	I0404 23:00:52.788931   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.789296   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.789333   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.804398   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33107
	I0404 23:00:52.804904   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.805654   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.805676   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.806123   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.806391   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.806825   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32861
	I0404 23:00:52.807228   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.807701   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.807729   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.808198   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.808222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.808414   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.810322   64791 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 23:00:52.809224   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0404 23:00:52.809886   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.811775   64791 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:52.811788   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 23:00:52.811806   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.813354   64791 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 23:00:52.812191   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.814489   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.814754   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 23:00:52.814765   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 23:00:52.814784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.815034   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.815192   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.815469   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.815641   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.815811   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.815997   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.816011   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.816227   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.816944   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.817981   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.818023   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.818260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818295   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.818316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818487   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.818752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.819016   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.819223   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.835713   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0404 23:00:52.836456   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.837140   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.837168   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.837987   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.838475   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.840280   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.840539   64791 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:52.840558   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 23:00:52.840576   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.843852   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.844244   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844418   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.844593   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.844757   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.844899   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.960152   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 23:00:52.982987   64791 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992420   64791 node_ready.go:49] node "default-k8s-diff-port-952083" has status "Ready":"True"
	I0404 23:00:52.992460   64791 node_ready.go:38] duration metric: took 9.431627ms for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992472   64791 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 23:00:52.998485   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:53.098746   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 23:00:53.098775   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 23:00:53.122387   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 23:00:53.122418   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 23:00:53.123566   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:53.143424   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:53.165026   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 23:00:53.165052   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 23:00:53.225963   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
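The apply above installs the metrics-server APIService, Deployment, RBAC, and Service manifests shipped by the metrics-server addon. A couple of hedged follow-up checks, assuming the Deployment is named metrics-server in kube-system (in this particular run the pod stays Pending because its image points at fake.domain, so these would not succeed here):

    # Illustrative verification of the metrics-server addon after the apply above.
    kubectl -n kube-system rollout status deployment/metrics-server --timeout=120s
    kubectl top nodes   # only works once the metrics API is actually serving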
	I0404 23:00:53.720949   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720969   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720980   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.720988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721399   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721407   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721413   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721415   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721423   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721425   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721763   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721768   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721774   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721781   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.754309   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.754337   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.754642   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.754661   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.174695   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.174726   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175070   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175111   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175133   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175146   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.175154   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175443   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175446   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175454   64791 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-952083"
	I0404 23:00:54.177650   64791 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0404 23:00:54.179204   64791 addons.go:505] duration metric: took 1.420843749s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0404 23:00:55.012235   64791 pod_ready.go:102] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"False"
	I0404 23:00:56.006950   64791 pod_ready.go:92] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.006974   64791 pod_ready.go:81] duration metric: took 3.00846182s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.006983   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.012991   64791 pod_ready.go:92] pod "coredns-76f75df574-vnzlh" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.013015   64791 pod_ready.go:81] duration metric: took 6.025362ms for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.013028   64791 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.017976   64791 pod_ready.go:92] pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.017999   64791 pod_ready.go:81] duration metric: took 4.963826ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.018008   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022797   64791 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.022816   64791 pod_ready.go:81] duration metric: took 4.802173ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022825   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.027987   64791 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.028011   64791 pod_ready.go:81] duration metric: took 5.178244ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.028024   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402956   64791 pod_ready.go:92] pod "kube-proxy-lbw9b" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.402978   64791 pod_ready.go:81] duration metric: took 374.945741ms for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402998   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806688   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.806723   64791 pod_ready.go:81] duration metric: took 403.715948ms for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806734   64791 pod_ready.go:38] duration metric: took 3.814250651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
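The readiness gate above waits for every system-critical pod (CoreDNS, etcd, apiserver, controller-manager, kube-proxy, scheduler) to report Ready. Roughly the same check can be reproduced with kubectl directly; the context name is assumed to match the profile:

    # Approximate manual equivalent of the system-pod readiness wait above
    # (broader than the log's label-based wait; it also covers addon pods).
    kubectl --context default-k8s-diff-port-952083 -n kube-system \
      wait pod --all --for=condition=Ready --timeout=360s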
	I0404 23:00:56.806748   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 23:00:56.806804   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 23:00:56.823558   64791 api_server.go:72] duration metric: took 4.065275824s to wait for apiserver process to appear ...
	I0404 23:00:56.823582   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 23:00:56.823602   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 23:00:56.828204   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 23:00:56.829616   64791 api_server.go:141] control plane version: v1.29.3
	I0404 23:00:56.829647   64791 api_server.go:131] duration metric: took 6.059596ms to wait for apiserver health ...
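The health probe above hits the apiserver's /healthz endpoint on the non-default port 8444 used by this profile. The same probe can be reproduced by hand, assuming anonymous access to /healthz is allowed (the Kubernetes default):

    # Manual equivalent of the apiserver health check above; -k skips TLS verification.
    curl -k https://192.168.72.148:8444/healthz
    # A healthy apiserver answers with the bare string: ok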
	I0404 23:00:56.829654   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 23:00:57.009301   64791 system_pods.go:59] 9 kube-system pods found
	I0404 23:00:57.009336   64791 system_pods.go:61] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.009343   64791 system_pods.go:61] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.009349   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.009354   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.009359   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.009364   64791 system_pods.go:61] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.009368   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.009377   64791 system_pods.go:61] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.009382   64791 system_pods.go:61] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.009394   64791 system_pods.go:74] duration metric: took 179.733932ms to wait for pod list to return data ...
	I0404 23:00:57.009404   64791 default_sa.go:34] waiting for default service account to be created ...
	I0404 23:00:57.203053   64791 default_sa.go:45] found service account: "default"
	I0404 23:00:57.203080   64791 default_sa.go:55] duration metric: took 193.668691ms for default service account to be created ...
	I0404 23:00:57.203092   64791 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 23:00:57.407952   64791 system_pods.go:86] 9 kube-system pods found
	I0404 23:00:57.407986   64791 system_pods.go:89] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.407992   64791 system_pods.go:89] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.407996   64791 system_pods.go:89] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.408001   64791 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.408005   64791 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.408009   64791 system_pods.go:89] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.408013   64791 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.408021   64791 system_pods.go:89] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.408025   64791 system_pods.go:89] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.408033   64791 system_pods.go:126] duration metric: took 204.93565ms to wait for k8s-apps to be running ...
	I0404 23:00:57.408044   64791 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 23:00:57.408086   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:57.426022   64791 system_svc.go:56] duration metric: took 17.970809ms WaitForService to wait for kubelet
	I0404 23:00:57.426056   64791 kubeadm.go:576] duration metric: took 4.66777886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 23:00:57.426088   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 23:00:57.603468   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 23:00:57.603495   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 23:00:57.603508   64791 node_conditions.go:105] duration metric: took 177.414876ms to run NodePressure ...
	I0404 23:00:57.603522   64791 start.go:240] waiting for startup goroutines ...
	I0404 23:00:57.603532   64791 start.go:245] waiting for cluster config update ...
	I0404 23:00:57.603544   64791 start.go:254] writing updated cluster config ...
	I0404 23:00:57.603820   64791 ssh_runner.go:195] Run: rm -f paused
	I0404 23:00:57.652962   64791 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 23:00:57.655346   64791 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-952083" cluster and "default" namespace by default
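With the kubeconfig updated, a few quick sanity checks confirm the cluster is reachable (illustrative; the context name follows the profile name):

    # Illustrative post-start checks against the freshly configured cluster.
    kubectl config current-context      # expected: default-k8s-diff-port-952083
    kubectl get nodes -o wide           # the single control-plane node should be Ready
    kubectl -n kube-system get pods     # core components plus the enabled addons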
	I0404 23:01:26.692667   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:01:26.693040   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:01:26.693065   65393 kubeadm.go:309] 
	I0404 23:01:26.693121   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:01:26.693176   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:01:26.693188   65393 kubeadm.go:309] 
	I0404 23:01:26.693230   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:01:26.693296   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:01:26.693448   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:01:26.693460   65393 kubeadm.go:309] 
	I0404 23:01:26.693615   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:01:26.693668   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:01:26.693715   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:01:26.693724   65393 kubeadm.go:309] 
	I0404 23:01:26.693859   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:01:26.693972   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0404 23:01:26.693981   65393 kubeadm.go:309] 
	I0404 23:01:26.694101   65393 kubeadm.go:309] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:01:26.694189   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:01:26.694308   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:01:26.694419   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:01:26.694439   65393 kubeadm.go:309] 
	I0404 23:01:26.695392   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:01:26.695534   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:01:26.695645   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0404 23:01:26.695786   65393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0404 23:01:26.695852   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:01:27.175340   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:01:27.191436   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:01:27.202985   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:01:27.203023   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 23:01:27.203095   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 23:01:27.214266   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:01:27.214326   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:01:27.225823   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 23:01:27.236219   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:01:27.236291   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:01:27.247062   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.257772   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:01:27.257838   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.270595   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 23:01:27.283809   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:01:27.283884   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
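The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if it does not contain it (here the files simply do not exist yet). A compact sketch of the same cleanup, not minikube's actual implementation:

    # Sketch of the stale kubeconfig cleanup visible in the log above.
    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not point at the endpoint
      fi
    done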
	I0404 23:01:27.294917   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:01:27.370268   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 23:01:27.370431   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:01:27.531171   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:01:27.531309   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:01:27.531502   65393 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0404 23:01:27.741128   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:01:27.743555   65393 out.go:204]   - Generating certificates and keys ...
	I0404 23:01:27.743674   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:01:27.743778   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:01:27.743900   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:01:27.744020   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:01:27.744144   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:01:27.744231   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:01:27.744396   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:01:27.744532   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:01:27.744762   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:01:27.745172   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:01:27.745405   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:01:27.745902   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:01:27.811633   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:01:27.874609   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:01:28.009290   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:01:28.171654   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:01:28.193647   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:01:28.194533   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:01:28.194615   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:01:28.345640   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:01:28.348028   65393 out.go:204]   - Booting up control plane ...
	I0404 23:01:28.348233   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:01:28.352245   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:01:28.354730   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:01:28.354860   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:01:28.366484   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:02:08.368069   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:02:08.368190   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:08.368485   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:13.368934   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:13.369212   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:23.369698   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:23.369947   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:43.370821   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:43.371097   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372612   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:03:23.372925   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372952   65393 kubeadm.go:309] 
	I0404 23:03:23.373009   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:03:23.373060   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:03:23.373083   65393 kubeadm.go:309] 
	I0404 23:03:23.373139   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:03:23.373204   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:03:23.373444   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:03:23.373460   65393 kubeadm.go:309] 
	I0404 23:03:23.373609   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:03:23.373664   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:03:23.373709   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:03:23.373720   65393 kubeadm.go:309] 
	I0404 23:03:23.373881   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:03:23.373993   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:03:23.374004   65393 kubeadm.go:309] 
	I0404 23:03:23.374160   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:03:23.374293   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:03:23.374425   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:03:23.374542   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:03:23.374565   65393 kubeadm.go:309] 
	I0404 23:03:23.375946   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:03:23.376063   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:03:23.376228   65393 kubeadm.go:393] duration metric: took 7m58.828175379s to StartCluster
	I0404 23:03:23.376287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 23:03:23.376229   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0404 23:03:23.376350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 23:03:23.427597   65393 cri.go:89] found id: ""
	I0404 23:03:23.427625   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.427637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 23:03:23.427644   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 23:03:23.427709   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 23:03:23.466127   65393 cri.go:89] found id: ""
	I0404 23:03:23.466158   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.466168   65393 logs.go:278] No container was found matching "etcd"
	I0404 23:03:23.466176   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 23:03:23.466240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 23:03:23.509244   65393 cri.go:89] found id: ""
	I0404 23:03:23.509287   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.509295   65393 logs.go:278] No container was found matching "coredns"
	I0404 23:03:23.509301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 23:03:23.509347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 23:03:23.553691   65393 cri.go:89] found id: ""
	I0404 23:03:23.553722   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.553732   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 23:03:23.553740   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 23:03:23.553807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 23:03:23.601941   65393 cri.go:89] found id: ""
	I0404 23:03:23.601965   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.601973   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 23:03:23.601979   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 23:03:23.602037   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 23:03:23.639767   65393 cri.go:89] found id: ""
	I0404 23:03:23.639802   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.639811   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 23:03:23.639817   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 23:03:23.639875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 23:03:23.680136   65393 cri.go:89] found id: ""
	I0404 23:03:23.680168   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.680179   65393 logs.go:278] No container was found matching "kindnet"
	I0404 23:03:23.680187   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 23:03:23.680246   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 23:03:23.719745   65393 cri.go:89] found id: ""
	I0404 23:03:23.719767   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.719774   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 23:03:23.719784   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 23:03:23.719797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 23:03:23.776065   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 23:03:23.776105   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 23:03:23.791442   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 23:03:23.791469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 23:03:23.884793   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 23:03:23.884820   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 23:03:23.884836   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 23:03:24.001882   65393 logs.go:123] Gathering logs for container status ...
	I0404 23:03:24.001926   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0404 23:03:24.056020   65393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0404 23:03:24.056075   65393 out.go:239] * 
	W0404 23:03:24.056157   65393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.056191   65393 out.go:239] * 
	W0404 23:03:24.057119   65393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 23:03:24.060566   65393 out.go:177] 
	W0404 23:03:24.061904   65393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.061981   65393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0404 23:03:24.062009   65393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0404 23:03:24.063812   65393 out.go:177] 
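	Note: this is the generic K8S_KUBELET_NOT_RUNNING outcome. Following the suggestion printed in the log above, a retry of the affected profile would look roughly like the line below; this is only a sketch, the profile name is a placeholder and is not taken from this run:
	
		minikube start -p <profile> --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd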
	
	
	==> CRI-O <==
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.291732479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272096291708655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26055c0d-7fb7-4a06-b136-78f3d94b82c1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.292402859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=743b4b42-ef2d-475e-9744-cd35a0c75774 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.292510144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=743b4b42-ef2d-475e-9744-cd35a0c75774 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.292745240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271319944324519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f64a5cb6eba494c5bacaa8a5881ccbf4bd9df021855e20a79ea9f9c38cef1,PodSandboxId:63f396ce437fa882b6204ac6868e442877bddc1ce19d6535769e174a0ff03820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271299313946588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad0b43d7-3e6c-4b57-9a18-ee03b01d2a25,},Annotations:map[string]string{io.kubernetes.container.hash: 5aabad23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429,PodSandboxId:746efa3d6e4565cbb3f3dc4e74787f82b633b20b658eaab609b9585ef1beef88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271296886064386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qh9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adbc1cb-cb87-4593-a183-a9a14cb8ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: f724136b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271289115513589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664,PodSandboxId:b8dda25455029ece6cd6ec97d5b707bcb7ed93e4493d47c3e01e5c63c963ab94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271289097805841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-psst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e8cdd-06fb-454a-97a2-7b0764ed0
a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 79368bd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c,PodSandboxId:816c2f4344e1332cf43d7411bdb6f2db971d48600eea9f7dbcb6906832bcbdf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271284447319322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74178d8315350a9cb5b02bd98c690be0,},Annotations:map[string]string{io.kub
ernetes.container.hash: ce79f91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef,PodSandboxId:e3d209d7c560b01535e2f1a4a0ff6613e9793f632147f3000e3d5e72cb15e553,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271284414680058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d13c82049393c6fce9c505b93bdbf112,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3baf2b20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88,PodSandboxId:27b7cde0b4274b4977193c152f717e9205a9f1637bc8e8910d35ee84b6672e3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271284422829216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e47b5206312f0c206cc4d3830884e5,},Annotations:map[string]string{io.kubernetes.container.hash:
be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b,PodSandboxId:adce183fc81d71b05857ec0a6e4609cd83d6a392385fd9712104a5abdb2f41fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271284364722390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfa7b30e7cec5445fb29a26ca742f03,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=743b4b42-ef2d-475e-9744-cd35a0c75774 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.332146464Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=041caaa4-6e68-47ac-8b5b-fcc58ebbe887 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.332242443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=041caaa4-6e68-47ac-8b5b-fcc58ebbe887 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.334137265Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f90b2b5-efb9-4400-a23d-2775aa58b8ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.334660109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272096334636567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f90b2b5-efb9-4400-a23d-2775aa58b8ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.335280810Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c799379-230b-4b5d-b92b-de628d7c52ef name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.335479694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c799379-230b-4b5d-b92b-de628d7c52ef name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.335836292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271319944324519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f64a5cb6eba494c5bacaa8a5881ccbf4bd9df021855e20a79ea9f9c38cef1,PodSandboxId:63f396ce437fa882b6204ac6868e442877bddc1ce19d6535769e174a0ff03820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271299313946588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad0b43d7-3e6c-4b57-9a18-ee03b01d2a25,},Annotations:map[string]string{io.kubernetes.container.hash: 5aabad23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429,PodSandboxId:746efa3d6e4565cbb3f3dc4e74787f82b633b20b658eaab609b9585ef1beef88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271296886064386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qh9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adbc1cb-cb87-4593-a183-a9a14cb8ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: f724136b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271289115513589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664,PodSandboxId:b8dda25455029ece6cd6ec97d5b707bcb7ed93e4493d47c3e01e5c63c963ab94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271289097805841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-psst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e8cdd-06fb-454a-97a2-7b0764ed0
a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 79368bd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c,PodSandboxId:816c2f4344e1332cf43d7411bdb6f2db971d48600eea9f7dbcb6906832bcbdf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271284447319322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74178d8315350a9cb5b02bd98c690be0,},Annotations:map[string]string{io.kub
ernetes.container.hash: ce79f91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef,PodSandboxId:e3d209d7c560b01535e2f1a4a0ff6613e9793f632147f3000e3d5e72cb15e553,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271284414680058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d13c82049393c6fce9c505b93bdbf112,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3baf2b20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88,PodSandboxId:27b7cde0b4274b4977193c152f717e9205a9f1637bc8e8910d35ee84b6672e3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271284422829216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e47b5206312f0c206cc4d3830884e5,},Annotations:map[string]string{io.kubernetes.container.hash:
be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b,PodSandboxId:adce183fc81d71b05857ec0a6e4609cd83d6a392385fd9712104a5abdb2f41fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271284364722390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfa7b30e7cec5445fb29a26ca742f03,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c799379-230b-4b5d-b92b-de628d7c52ef name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.381007431Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3841ac5a-3679-4937-9117-bea31adde715 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.381084525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3841ac5a-3679-4937-9117-bea31adde715 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.382657388Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd5dbda7-7a2f-4818-b989-8f3b99700fb0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.383210474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272096383185460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd5dbda7-7a2f-4818-b989-8f3b99700fb0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.384226396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f73eb341-4466-4478-ba01-eaf1cc3cce6a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.384291120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f73eb341-4466-4478-ba01-eaf1cc3cce6a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.384646042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271319944324519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f64a5cb6eba494c5bacaa8a5881ccbf4bd9df021855e20a79ea9f9c38cef1,PodSandboxId:63f396ce437fa882b6204ac6868e442877bddc1ce19d6535769e174a0ff03820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271299313946588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad0b43d7-3e6c-4b57-9a18-ee03b01d2a25,},Annotations:map[string]string{io.kubernetes.container.hash: 5aabad23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429,PodSandboxId:746efa3d6e4565cbb3f3dc4e74787f82b633b20b658eaab609b9585ef1beef88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271296886064386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qh9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adbc1cb-cb87-4593-a183-a9a14cb8ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: f724136b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271289115513589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664,PodSandboxId:b8dda25455029ece6cd6ec97d5b707bcb7ed93e4493d47c3e01e5c63c963ab94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271289097805841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-psst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e8cdd-06fb-454a-97a2-7b0764ed0
a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 79368bd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c,PodSandboxId:816c2f4344e1332cf43d7411bdb6f2db971d48600eea9f7dbcb6906832bcbdf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271284447319322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74178d8315350a9cb5b02bd98c690be0,},Annotations:map[string]string{io.kub
ernetes.container.hash: ce79f91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef,PodSandboxId:e3d209d7c560b01535e2f1a4a0ff6613e9793f632147f3000e3d5e72cb15e553,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271284414680058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d13c82049393c6fce9c505b93bdbf112,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3baf2b20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88,PodSandboxId:27b7cde0b4274b4977193c152f717e9205a9f1637bc8e8910d35ee84b6672e3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271284422829216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e47b5206312f0c206cc4d3830884e5,},Annotations:map[string]string{io.kubernetes.container.hash:
be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b,PodSandboxId:adce183fc81d71b05857ec0a6e4609cd83d6a392385fd9712104a5abdb2f41fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271284364722390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfa7b30e7cec5445fb29a26ca742f03,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f73eb341-4466-4478-ba01-eaf1cc3cce6a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.421137896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bbde0f1e-6df5-47b6-a66c-e23276c6bc96 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.421209203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbde0f1e-6df5-47b6-a66c-e23276c6bc96 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.422549526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7cdbc769-a5dc-4333-a593-b1e58ae6dc98 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.423022973Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272096422999178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cdbc769-a5dc-4333-a593-b1e58ae6dc98 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.423625675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e3a38ba-b814-43b0-9976-0afc757ade38 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.423854640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e3a38ba-b814-43b0-9976-0afc757ade38 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:16 embed-certs-143118 crio[727]: time="2024-04-04 23:08:16.424059817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271319944324519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f64a5cb6eba494c5bacaa8a5881ccbf4bd9df021855e20a79ea9f9c38cef1,PodSandboxId:63f396ce437fa882b6204ac6868e442877bddc1ce19d6535769e174a0ff03820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271299313946588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad0b43d7-3e6c-4b57-9a18-ee03b01d2a25,},Annotations:map[string]string{io.kubernetes.container.hash: 5aabad23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429,PodSandboxId:746efa3d6e4565cbb3f3dc4e74787f82b633b20b658eaab609b9585ef1beef88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271296886064386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qh9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adbc1cb-cb87-4593-a183-a9a14cb8ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: f724136b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271289115513589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664,PodSandboxId:b8dda25455029ece6cd6ec97d5b707bcb7ed93e4493d47c3e01e5c63c963ab94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271289097805841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-psst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e8cdd-06fb-454a-97a2-7b0764ed0
a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 79368bd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c,PodSandboxId:816c2f4344e1332cf43d7411bdb6f2db971d48600eea9f7dbcb6906832bcbdf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271284447319322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74178d8315350a9cb5b02bd98c690be0,},Annotations:map[string]string{io.kub
ernetes.container.hash: ce79f91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef,PodSandboxId:e3d209d7c560b01535e2f1a4a0ff6613e9793f632147f3000e3d5e72cb15e553,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271284414680058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d13c82049393c6fce9c505b93bdbf112,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3baf2b20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88,PodSandboxId:27b7cde0b4274b4977193c152f717e9205a9f1637bc8e8910d35ee84b6672e3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271284422829216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e47b5206312f0c206cc4d3830884e5,},Annotations:map[string]string{io.kubernetes.container.hash:
be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b,PodSandboxId:adce183fc81d71b05857ec0a6e4609cd83d6a392385fd9712104a5abdb2f41fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271284364722390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfa7b30e7cec5445fb29a26ca742f03,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e3a38ba-b814-43b0-9976-0afc757ade38 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	634138d6bde20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   261fceb686acc       storage-provisioner
	eb6f64a5cb6eb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   63f396ce437fa       busybox
	712b227f7cfb0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   746efa3d6e456       coredns-76f75df574-9qh9s
	6c047a719f155       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   261fceb686acc       storage-provisioner
	27fc077394a7d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      13 minutes ago      Running             kube-proxy                1                   b8dda25455029       kube-proxy-psst7
	ecdd813ae02e8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   816c2f4344e13       etcd-embed-certs-143118
	46137dbe2189d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      13 minutes ago      Running             kube-scheduler            1                   27b7cde0b4274       kube-scheduler-embed-certs-143118
	31cb759c8e7bc       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      13 minutes ago      Running             kube-apiserver            1                   e3d209d7c560b       kube-apiserver-embed-certs-143118
	58b9430fea2e8       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      13 minutes ago      Running             kube-controller-manager   1                   adce183fc81d7       kube-controller-manager-embed-certs-143118
	
	
	==> coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39408 - 45167 "HINFO IN 2687519719721392437.3884915358984449562. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.058065701s
	
	
	==> describe nodes <==
	Name:               embed-certs-143118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-143118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=embed-certs-143118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T22_46_10_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 22:46:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-143118
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 23:08:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 23:05:30 +0000   Thu, 04 Apr 2024 22:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 23:05:30 +0000   Thu, 04 Apr 2024 22:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 23:05:30 +0000   Thu, 04 Apr 2024 22:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 23:05:30 +0000   Thu, 04 Apr 2024 22:54:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.137
	  Hostname:    embed-certs-143118
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7148ba4297d4a75bed2c7ff809a89d8
	  System UUID:                a7148ba4-297d-4a75-bed2-c7ff809a89d8
	  Boot ID:                    4f0c6e40-0013-4670-ae75-864aac291198
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-76f75df574-9qh9s                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-143118                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-143118             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-143118    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-psst7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-143118             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-xwm4m               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-143118 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-143118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-143118 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                22m                kubelet          Node embed-certs-143118 status is now: NodeReady
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-143118 event: Registered Node embed-certs-143118 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-143118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-143118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-143118 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-143118 event: Registered Node embed-certs-143118 in Controller
	
	
	==> dmesg <==
	[Apr 4 22:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052957] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042208] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.541196] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.830176] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.643540] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.437015] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.058374] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059203] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.210488] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.131081] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.318465] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.820921] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +0.063481] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.398118] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +5.599380] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.970950] systemd-fstab-generator[1567]: Ignoring "noauto" option for root device
	[  +5.295483] kauditd_printk_skb: 78 callbacks suppressed
	[Apr 4 22:55] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] <==
	{"level":"info","ts":"2024-04-04T22:54:46.334757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cd68190d43a88764 elected leader cd68190d43a88764 at term 3"}
	{"level":"info","ts":"2024-04-04T22:54:46.371531Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:54:46.371458Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"cd68190d43a88764","local-member-attributes":"{Name:embed-certs-143118 ClientURLs:[https://192.168.61.137:2379]}","request-path":"/0/members/cd68190d43a88764/attributes","cluster-id":"c81a097889804662","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-04T22:54:46.372308Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T22:54:46.373427Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-04T22:54:46.373557Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-04T22:54:46.374589Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-04T22:54:46.377906Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.137:2379"}
	{"level":"warn","ts":"2024-04-04T22:55:42.021966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.03835ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" ","response":"range_response_count:1 size:4239"}
	{"level":"info","ts":"2024-04-04T22:55:42.022089Z","caller":"traceutil/trace.go:171","msg":"trace[712388184] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m; range_end:; response_count:1; response_revision:631; }","duration":"197.238434ms","start":"2024-04-04T22:55:41.824825Z","end":"2024-04-04T22:55:42.022063Z","steps":["trace[712388184] 'range keys from in-memory index tree'  (duration: 196.850495ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T22:55:43.043626Z","caller":"traceutil/trace.go:171","msg":"trace[1362214223] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"306.368385ms","start":"2024-04-04T22:55:42.73724Z","end":"2024-04-04T22:55:43.043609Z","steps":["trace[1362214223] 'process raft request'  (duration: 306.218101ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.044798Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:42.737229Z","time spent":"306.691682ms","remote":"127.0.0.1:44490","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":813,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xwm4m.17c335abad2d8c31\" mod_revision:610 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xwm4m.17c335abad2d8c31\" value_size:718 lease:532707522408767873 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xwm4m.17c335abad2d8c31\" > >"}
	{"level":"warn","ts":"2024-04-04T22:55:43.595723Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.12563ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9756079559263544498 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" mod_revision:622 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" value_size:4202 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-04T22:55:43.595821Z","caller":"traceutil/trace.go:171","msg":"trace[1163610137] linearizableReadLoop","detail":"{readStateIndex:686; appliedIndex:685; }","duration":"770.502068ms","start":"2024-04-04T22:55:42.825293Z","end":"2024-04-04T22:55:43.595795Z","steps":["trace[1163610137] 'read index received'  (duration: 218.528802ms)","trace[1163610137] 'applied index is now lower than readState.Index'  (duration: 551.972531ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-04T22:55:43.595988Z","caller":"traceutil/trace.go:171","msg":"trace[681328634] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"855.461768ms","start":"2024-04-04T22:55:42.740518Z","end":"2024-04-04T22:55:43.595979Z","steps":["trace[681328634] 'process raft request'  (duration: 603.910856ms)","trace[681328634] 'compare'  (duration: 250.983624ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-04T22:55:43.596066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:42.740507Z","time spent":"855.527348ms","remote":"127.0.0.1:44596","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4268,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" mod_revision:622 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" value_size:4202 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" > >"}
	{"level":"warn","ts":"2024-04-04T22:55:43.596215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"770.921421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" ","response":"range_response_count:1 size:4283"}
	{"level":"info","ts":"2024-04-04T22:55:43.596256Z","caller":"traceutil/trace.go:171","msg":"trace[221362184] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m; range_end:; response_count:1; response_revision:633; }","duration":"770.981847ms","start":"2024-04-04T22:55:42.825268Z","end":"2024-04-04T22:55:43.59625Z","steps":["trace[221362184] 'agreement among raft nodes before linearized reading'  (duration: 770.916402ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.596277Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:42.825256Z","time spent":"771.016422ms","remote":"127.0.0.1:44596","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4307,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" "}
	{"level":"warn","ts":"2024-04-04T22:55:43.596538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"544.018085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xwm4m.17c335abad2dcc3f\" ","response":"range_response_count:1 size:784"}
	{"level":"info","ts":"2024-04-04T22:55:43.596654Z","caller":"traceutil/trace.go:171","msg":"trace[2075725438] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-57f55c9bc5-xwm4m.17c335abad2dcc3f; range_end:; response_count:1; response_revision:633; }","duration":"544.322923ms","start":"2024-04-04T22:55:43.05232Z","end":"2024-04-04T22:55:43.596643Z","steps":["trace[2075725438] 'agreement among raft nodes before linearized reading'  (duration: 544.155919ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.596699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:43.052305Z","time spent":"544.386891ms","remote":"127.0.0.1:44490","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":808,"request content":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xwm4m.17c335abad2dcc3f\" "}
	{"level":"info","ts":"2024-04-04T23:04:46.423175Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2024-04-04T23:04:46.437809Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":841,"took":"13.819656ms","hash":3509900284,"current-db-size-bytes":2654208,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2654208,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-04-04T23:04:46.437878Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3509900284,"revision":841,"compact-revision":-1}
	
	
	==> kernel <==
	 23:08:16 up 13 min,  0 users,  load average: 0.03, 0.07, 0.08
	Linux embed-certs-143118 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] <==
	I0404 23:02:48.803789       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:04:47.805900       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:04:47.806089       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0404 23:04:48.806706       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:04:48.806845       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:04:48.806858       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:04:48.807059       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:04:48.807120       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:04:48.808398       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:05:48.807598       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:05:48.807809       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:05:48.807818       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:05:48.808662       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:05:48.808768       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:05:48.808859       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:07:48.808585       1 handler_proxy.go:93] no RequestInfo found in the context
	W0404 23:07:48.808985       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:07:48.809048       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:07:48.809057       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0404 23:07:48.808996       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:07:48.810766       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] <==
	I0404 23:02:31.148457       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:03:00.682240       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:03:01.156036       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:03:30.692504       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:03:31.166677       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:04:00.697010       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:04:01.177392       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:04:30.702599       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:04:31.186281       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:05:00.707913       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:05:01.197072       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:05:30.714836       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:05:31.206579       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0404 23:05:53.752926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="264.578µs"
	E0404 23:06:00.720411       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:06:01.216759       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0404 23:06:04.749008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="88.803µs"
	E0404 23:06:30.725330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:06:31.228645       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:07:00.731897       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:07:01.239110       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:07:30.737534       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:07:31.248236       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:08:00.743570       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:08:01.257025       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] <==
	I0404 22:54:49.301146       1 server_others.go:72] "Using iptables proxy"
	I0404 22:54:49.325662       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.137"]
	I0404 22:54:49.382786       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 22:54:49.382807       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 22:54:49.382823       1 server_others.go:168] "Using iptables Proxier"
	I0404 22:54:49.385867       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 22:54:49.386146       1 server.go:865] "Version info" version="v1.29.3"
	I0404 22:54:49.386158       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:54:49.387646       1 config.go:188] "Starting service config controller"
	I0404 22:54:49.387688       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 22:54:49.387707       1 config.go:97] "Starting endpoint slice config controller"
	I0404 22:54:49.387712       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 22:54:49.388074       1 config.go:315] "Starting node config controller"
	I0404 22:54:49.388111       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 22:54:49.490483       1 shared_informer.go:318] Caches are synced for node config
	I0404 22:54:49.490859       1 shared_informer.go:318] Caches are synced for service config
	I0404 22:54:49.490935       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] <==
	I0404 22:54:45.703174       1 serving.go:380] Generated self-signed cert in-memory
	W0404 22:54:47.728910       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0404 22:54:47.728964       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 22:54:47.728976       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0404 22:54:47.728982       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0404 22:54:47.821082       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0404 22:54:47.821132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:54:47.827907       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0404 22:54:47.828039       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0404 22:54:47.828053       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:54:47.828066       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0404 22:54:47.928152       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 04 23:05:43 embed-certs-143118 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:05:43 embed-certs-143118 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:05:43 embed-certs-143118 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:05:53 embed-certs-143118 kubelet[942]: E0404 23:05:53.734609     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:06:04 embed-certs-143118 kubelet[942]: E0404 23:06:04.735128     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:06:16 embed-certs-143118 kubelet[942]: E0404 23:06:16.732181     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:06:30 embed-certs-143118 kubelet[942]: E0404 23:06:30.732429     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:06:43 embed-certs-143118 kubelet[942]: E0404 23:06:43.757715     942 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 23:06:43 embed-certs-143118 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:06:43 embed-certs-143118 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:06:43 embed-certs-143118 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:06:43 embed-certs-143118 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:06:44 embed-certs-143118 kubelet[942]: E0404 23:06:44.735250     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:06:57 embed-certs-143118 kubelet[942]: E0404 23:06:57.732695     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:07:10 embed-certs-143118 kubelet[942]: E0404 23:07:10.732776     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:07:25 embed-certs-143118 kubelet[942]: E0404 23:07:25.736126     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:07:37 embed-certs-143118 kubelet[942]: E0404 23:07:37.733141     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:07:43 embed-certs-143118 kubelet[942]: E0404 23:07:43.757882     942 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 23:07:43 embed-certs-143118 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:07:43 embed-certs-143118 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:07:43 embed-certs-143118 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:07:43 embed-certs-143118 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:07:52 embed-certs-143118 kubelet[942]: E0404 23:07:52.732204     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:08:03 embed-certs-143118 kubelet[942]: E0404 23:08:03.732223     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:08:14 embed-certs-143118 kubelet[942]: E0404 23:08:14.733769     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	
	
	==> storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] <==
	I0404 22:55:20.097608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0404 22:55:20.115046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0404 22:55:20.115169       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0404 22:55:37.526898       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0404 22:55:37.527410       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-143118_35f99663-34e0-4ce6-9f9f-5b17186545aa!
	I0404 22:55:37.529269       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2135055d-48db-48ff-a18c-7eb1367f3d59", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-143118_35f99663-34e0-4ce6-9f9f-5b17186545aa became leader
	I0404 22:55:37.627974       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-143118_35f99663-34e0-4ce6-9f9f-5b17186545aa!
	
	
	==> storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] <==
	I0404 22:54:49.264544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0404 22:55:19.269086       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
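Two details in the log above are worth noting. The first storage-provisioner instance died with an i/o timeout against the in-cluster API address (10.96.0.1:443), and only the restarted instance acquired the leader lease; the metrics-server ImagePullBackOff is expected here, because the test enables the addon with --registries=MetricsServer=fake.domain (see the Audit table further down). As a rough manual check of in-cluster API reachability, assuming the embed-certs-143118 context still exists, and using curlimages/curl as an arbitrary utility image (both the pod name api-check and the image are illustrative, not part of the harness):

	kubectl --context embed-certs-143118 run api-check --rm -i --restart=Never \
	  --image=curlimages/curl -- curl -sk https://10.96.0.1/version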
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-143118 -n embed-certs-143118
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-143118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xwm4m
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-143118 describe pod metrics-server-57f55c9bc5-xwm4m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-143118 describe pod metrics-server-57f55c9bc5-xwm4m: exit status 1 (67.469318ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xwm4m" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-143118 describe pod metrics-server-57f55c9bc5-xwm4m: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.37s)
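Note that helpers_test.go:272 listed metrics-server-57f55c9bc5-xwm4m as non-running, yet the describe immediately afterwards returned NotFound, so the pod was evidently replaced between the two calls. When repeating this post-mortem by hand, selecting by label avoids pinning a pod name that may already be gone (the k8s-app=metrics-server label is an assumption based on typical metrics-server manifests, not something taken from this log):

	kubectl --context embed-certs-143118 -n kube-system get pods -l k8s-app=metrics-server -o yaml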

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0404 23:00:23.599232   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-024416 -n no-preload-024416
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-04 23:08:50.06548365 +0000 UTC m=+5985.928469355
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
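The failing wait can be reproduced outside the harness with kubectl wait, using the same label and namespace the test polls (k8s-app=kubernetes-dashboard in kubernetes-dashboard) and a comparable timeout; this is only a sketch and assumes the no-preload-024416 context is still reachable:

	kubectl --context no-preload-024416 -n kubernetes-dashboard wait \
	  --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m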
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-024416 -n no-preload-024416
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-024416 logs -n 25
E0404 23:08:50.480443   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-024416 logs -n 25: (2.105411503s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-063570 sudo cat                              | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo find                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo crio                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-063570                                       | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-443615 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | disable-driver-mounts-443615                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:46 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-952083  | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-143118            | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-024416             | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-343162        | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-952083       | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 23:00 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-143118                 | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-024416                  | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-343162             | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 22:50:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 22:50:56.470398   65393 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:50:56.470518   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470527   65393 out.go:304] Setting ErrFile to fd 2...
	I0404 22:50:56.470531   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470702   65393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:50:56.471207   65393 out.go:298] Setting JSON to false
	I0404 22:50:56.472107   65393 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5602,"bootTime":1712265455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:50:56.472206   65393 start.go:139] virtualization: kvm guest
	I0404 22:50:56.474636   65393 out.go:177] * [old-k8s-version-343162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:50:56.477086   65393 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:50:56.477116   65393 notify.go:220] Checking for updates...
	I0404 22:50:56.479875   65393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:50:56.481381   65393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:50:56.482598   65393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:50:56.483896   65393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:50:56.485276   65393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:50:56.487030   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:50:56.487414   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.487471   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.502537   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44449
	I0404 22:50:56.502977   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.503568   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.503592   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.503915   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.504146   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.506219   65393 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0404 22:50:56.507513   65393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:50:56.507838   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.507872   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.522700   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0404 22:50:56.523202   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.523675   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.523702   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.524012   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.524213   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.560806   65393 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 22:50:56.562468   65393 start.go:297] selected driver: kvm2
	I0404 22:50:56.562485   65393 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.562593   65393 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:50:56.563382   65393 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.563463   65393 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:50:56.579804   65393 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:50:56.580216   65393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:50:56.580311   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:50:56.580325   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:50:56.580361   65393 start.go:340] cluster config:
	{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.580508   65393 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.582753   65393 out.go:177] * Starting "old-k8s-version-343162" primary control-plane node in "old-k8s-version-343162" cluster
	I0404 22:50:55.940353   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:50:56.584381   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:50:56.584440   65393 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0404 22:50:56.584451   65393 cache.go:56] Caching tarball of preloaded images
	I0404 22:50:56.584585   65393 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 22:50:56.584616   65393 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0404 22:50:56.584754   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:50:56.585041   65393 start.go:360] acquireMachinesLock for old-k8s-version-343162: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:50:59.012433   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:05.092380   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:08.164456   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:14.244461   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:17.316477   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:23.396389   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:26.468349   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:32.548445   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:35.620422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:41.700386   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:44.772391   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:50.852426   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:53.924460   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:00.004442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:03.076397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:09.156418   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:12.228432   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:18.308468   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:21.380441   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:27.460405   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:30.532470   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:36.612450   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:39.684473   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:45.764397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:48.836435   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:54.916369   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:57.992400   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:04.068476   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:07.140457   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:13.220492   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:16.292379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:22.372422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:25.444444   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:31.524421   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:34.596378   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:40.676452   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:43.748475   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:49.828481   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:52.900427   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:58.980363   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:02.052442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:08.132379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:11.204438   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:14.209023   64902 start.go:364] duration metric: took 4m27.708792236s to acquireMachinesLock for "embed-certs-143118"
	I0404 22:54:14.209073   64902 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:14.209081   64902 fix.go:54] fixHost starting: 
	I0404 22:54:14.209454   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:14.209493   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:14.224780   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0404 22:54:14.225202   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:14.225774   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:14.225796   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:14.226086   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:14.226268   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:14.226381   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:14.227982   64902 fix.go:112] recreateIfNeeded on embed-certs-143118: state=Stopped err=<nil>
	I0404 22:54:14.228030   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	W0404 22:54:14.228195   64902 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:14.229974   64902 out.go:177] * Restarting existing kvm2 VM for "embed-certs-143118" ...
	I0404 22:54:14.231465   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Start
	I0404 22:54:14.231647   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring networks are active...
	I0404 22:54:14.232454   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network default is active
	I0404 22:54:14.232844   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network mk-embed-certs-143118 is active
	I0404 22:54:14.233268   64902 main.go:141] libmachine: (embed-certs-143118) Getting domain xml...
	I0404 22:54:14.234059   64902 main.go:141] libmachine: (embed-certs-143118) Creating domain...
	I0404 22:54:15.443570   64902 main.go:141] libmachine: (embed-certs-143118) Waiting to get IP...
	I0404 22:54:15.444501   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.444881   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.444974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.444857   65866 retry.go:31] will retry after 235.384261ms: waiting for machine to come up
	I0404 22:54:15.682336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.682843   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.682889   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.682802   65866 retry.go:31] will retry after 298.217645ms: waiting for machine to come up
	I0404 22:54:15.982346   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.982793   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.982822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.982737   65866 retry.go:31] will retry after 388.227781ms: waiting for machine to come up
	I0404 22:54:16.372259   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.372758   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.372783   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.372709   65866 retry.go:31] will retry after 455.494549ms: waiting for machine to come up
	I0404 22:54:14.206346   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:14.206380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206696   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:54:14.206723   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206981   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:54:14.208882   64791 machine.go:97] duration metric: took 4m37.40831033s to provisionDockerMachine
	I0404 22:54:14.208934   64791 fix.go:56] duration metric: took 4m37.430672573s for fixHost
	I0404 22:54:14.208943   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 4m37.430710876s
	W0404 22:54:14.208977   64791 start.go:713] error starting host: provision: host is not running
	W0404 22:54:14.209074   64791 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0404 22:54:14.209085   64791 start.go:728] Will try again in 5 seconds ...
	I0404 22:54:16.829381   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.829863   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.829895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.829758   65866 retry.go:31] will retry after 476.920945ms: waiting for machine to come up
	I0404 22:54:17.308558   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:17.309233   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:17.309260   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:17.309155   65866 retry.go:31] will retry after 723.322819ms: waiting for machine to come up
	I0404 22:54:18.034138   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.034605   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.034633   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.034559   65866 retry.go:31] will retry after 858.492179ms: waiting for machine to come up
	I0404 22:54:18.894172   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.894590   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.894621   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.894541   65866 retry.go:31] will retry after 1.243998506s: waiting for machine to come up
	I0404 22:54:20.140387   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:20.140872   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:20.140906   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:20.140777   65866 retry.go:31] will retry after 1.245446322s: waiting for machine to come up
	I0404 22:54:21.388210   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:21.388626   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:21.388653   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:21.388588   65866 retry.go:31] will retry after 2.315520772s: waiting for machine to come up
	I0404 22:54:19.210605   64791 start.go:360] acquireMachinesLock for default-k8s-diff-port-952083: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:54:23.707374   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:23.707938   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:23.707971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:23.707895   65866 retry.go:31] will retry after 1.925131112s: waiting for machine to come up
	I0404 22:54:25.635778   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:25.636251   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:25.636281   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:25.636192   65866 retry.go:31] will retry after 3.393560306s: waiting for machine to come up
	I0404 22:54:29.033804   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:29.034284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:29.034311   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:29.034223   65866 retry.go:31] will retry after 4.387913748s: waiting for machine to come up
	I0404 22:54:34.941563   65047 start.go:364] duration metric: took 4m31.283417073s to acquireMachinesLock for "no-preload-024416"
	I0404 22:54:34.941648   65047 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:34.941658   65047 fix.go:54] fixHost starting: 
	I0404 22:54:34.942144   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:34.942187   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:34.959447   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0404 22:54:34.959938   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:34.960536   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:54:34.960563   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:34.960909   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:34.961137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:34.961305   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:54:34.963158   65047 fix.go:112] recreateIfNeeded on no-preload-024416: state=Stopped err=<nil>
	I0404 22:54:34.963183   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	W0404 22:54:34.963366   65047 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:34.965339   65047 out.go:177] * Restarting existing kvm2 VM for "no-preload-024416" ...
	I0404 22:54:33.427303   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427873   64902 main.go:141] libmachine: (embed-certs-143118) Found IP for machine: 192.168.61.137
	I0404 22:54:33.427903   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has current primary IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427916   64902 main.go:141] libmachine: (embed-certs-143118) Reserving static IP address...
	I0404 22:54:33.428384   64902 main.go:141] libmachine: (embed-certs-143118) Reserved static IP address: 192.168.61.137
	I0404 22:54:33.428436   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.428454   64902 main.go:141] libmachine: (embed-certs-143118) Waiting for SSH to be available...
	I0404 22:54:33.428483   64902 main.go:141] libmachine: (embed-certs-143118) DBG | skip adding static IP to network mk-embed-certs-143118 - found existing host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"}
	I0404 22:54:33.428496   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Getting to WaitForSSH function...
	I0404 22:54:33.430650   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.430971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.430999   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.431167   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH client type: external
	I0404 22:54:33.431187   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa (-rw-------)
	I0404 22:54:33.431213   64902 main.go:141] libmachine: (embed-certs-143118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:33.431231   64902 main.go:141] libmachine: (embed-certs-143118) DBG | About to run SSH command:
	I0404 22:54:33.431247   64902 main.go:141] libmachine: (embed-certs-143118) DBG | exit 0
	I0404 22:54:33.556780   64902 main.go:141] libmachine: (embed-certs-143118) DBG | SSH cmd err, output: <nil>: 
	I0404 22:54:33.557184   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetConfigRaw
	I0404 22:54:33.557821   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.560786   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561122   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.561156   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561482   64902 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/config.json ...
	I0404 22:54:33.561714   64902 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:33.561738   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:33.562027   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.564494   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564777   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.564800   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564996   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.565203   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565364   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565525   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.565660   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.565840   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.565851   64902 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:33.672777   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:33.672807   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673051   64902 buildroot.go:166] provisioning hostname "embed-certs-143118"
	I0404 22:54:33.673072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673221   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.675895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676276   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.676302   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676441   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.676631   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676783   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676920   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.677099   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.677291   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.677306   64902 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-143118 && echo "embed-certs-143118" | sudo tee /etc/hostname
	I0404 22:54:33.800210   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-143118
	
	I0404 22:54:33.800234   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.802902   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.803313   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803464   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.803699   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.803917   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.804130   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.804307   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.804477   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.804493   64902 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-143118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-143118/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-143118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:33.922754   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:33.922787   64902 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:33.922810   64902 buildroot.go:174] setting up certificates
	I0404 22:54:33.922821   64902 provision.go:84] configureAuth start
	I0404 22:54:33.922829   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.923168   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.926018   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926349   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.926376   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926536   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.928860   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929202   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.929225   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929405   64902 provision.go:143] copyHostCerts
	I0404 22:54:33.929464   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:33.929474   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:33.929538   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:33.929623   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:33.929631   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:33.929654   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:33.929705   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:33.929712   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:33.929733   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:33.929781   64902 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.embed-certs-143118 san=[127.0.0.1 192.168.61.137 embed-certs-143118 localhost minikube]
	I0404 22:54:34.248318   64902 provision.go:177] copyRemoteCerts
	I0404 22:54:34.248368   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:34.248392   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.251549   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.251969   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.252005   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.252162   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.252415   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.252592   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.252806   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.340287   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:54:34.367083   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:34.392473   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:34.418850   64902 provision.go:87] duration metric: took 496.019024ms to configureAuth
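For readers tracing the configureAuth step above, the following is a minimal, illustrative Go sketch of issuing a server certificate with the SANs and organization logged by provision.go:117, using only the standard library. The ca.pem/ca-key.pem names are stand-ins for local copies of the CA material, and the PKCS#1 key format is an assumption; this is not minikube's actual implementation.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// mustLoadCA reads a PEM CA certificate and its PKCS#1 RSA key
	// (an assumption about the key format; adjust if the key is PKCS#8).
	func mustLoadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
		certPEM, err := os.ReadFile(certPath)
		if err != nil {
			panic(err)
		}
		keyPEM, err := os.ReadFile(keyPath)
		if err != nil {
			panic(err)
		}
		cb, _ := pem.Decode(certPEM)
		kb, _ := pem.Decode(keyPEM)
		cert, err := x509.ParseCertificate(cb.Bytes)
		if err != nil {
			panic(err)
		}
		key, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
		if err != nil {
			panic(err)
		}
		return cert, key
	}

	func main() {
		caCert, caKey := mustLoadCA("ca.pem", "ca-key.pem") // hypothetical local copies

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-143118"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as logged: 127.0.0.1 192.168.61.137 embed-certs-143118 localhost minikube
			DNSNames:    []string{"embed-certs-143118", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.137")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}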
	I0404 22:54:34.418876   64902 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:34.419050   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:34.419118   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.422414   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422794   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.422822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422989   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.423196   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423395   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423548   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.423706   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.423903   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.423926   64902 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:34.698283   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:34.698315   64902 machine.go:97] duration metric: took 1.136579802s to provisionDockerMachine
	I0404 22:54:34.698330   64902 start.go:293] postStartSetup for "embed-certs-143118" (driver="kvm2")
	I0404 22:54:34.698343   64902 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:34.698362   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.698738   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:34.698767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.701491   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.701869   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.701899   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.702062   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.702269   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.702410   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.702580   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.787376   64902 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:34.791940   64902 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:34.791969   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:34.792032   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:34.792113   64902 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:34.792228   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:34.801800   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:34.828107   64902 start.go:296] duration metric: took 129.762377ms for postStartSetup
	I0404 22:54:34.828175   64902 fix.go:56] duration metric: took 20.61909326s for fixHost
	I0404 22:54:34.828200   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.831336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.831914   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.831939   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.832211   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.832443   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832725   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832911   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.833074   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.833242   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.833253   64902 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0404 22:54:34.941376   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271274.913625729
	
	I0404 22:54:34.941401   64902 fix.go:216] guest clock: 1712271274.913625729
	I0404 22:54:34.941409   64902 fix.go:229] Guest: 2024-04-04 22:54:34.913625729 +0000 UTC Remote: 2024-04-04 22:54:34.828180786 +0000 UTC m=+288.480480037 (delta=85.444943ms)
	I0404 22:54:34.941435   64902 fix.go:200] guest clock delta is within tolerance: 85.444943ms
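The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the result if the delta is within tolerance. A rough, self-contained Go sketch of that comparison follows; runOnGuest is a hypothetical stand-in for the SSH runner, and the 2-second tolerance is an assumed value, not minikube's.

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// runOnGuest is a placeholder; in the real flow the command runs over SSH.
	// The sample value is the guest clock reading from the log above.
	func runOnGuest(cmd string) string { return "1712271274.913625729" }

	func main() {
		out := strings.TrimSpace(runOnGuest("date +%s.%N"))
		secs, err := strconv.ParseFloat(out, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)

		const tolerance = 2 * time.Second // assumed threshold for this sketch
		if math.Abs(delta.Seconds()) > tolerance.Seconds() {
			fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
		} else {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		}
	}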
	I0404 22:54:34.941442   64902 start.go:83] releasing machines lock for "embed-certs-143118", held for 20.732383788s
	I0404 22:54:34.941472   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.941770   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:34.944521   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.944943   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.944973   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.945137   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945761   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945994   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.946079   64902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:34.946123   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.946244   64902 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:34.946271   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.948974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949059   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949433   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949468   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949503   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949597   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949832   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949893   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950007   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950094   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950167   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950239   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.950311   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:35.061691   64902 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:35.068941   64902 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:35.222979   64902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:35.230776   64902 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:35.230861   64902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:35.250962   64902 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:35.250993   64902 start.go:494] detecting cgroup driver to use...
	I0404 22:54:35.251078   64902 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:35.270027   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:35.286582   64902 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:35.286642   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:35.305465   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:35.323473   64902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:35.448815   64902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:35.601864   64902 docker.go:233] disabling docker service ...
	I0404 22:54:35.602013   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:35.621617   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:35.638755   64902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:35.784051   64902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:35.909763   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:35.925820   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:35.946492   64902 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:35.946555   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.958517   64902 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:35.958584   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.971470   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.983820   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.996000   64902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:36.009730   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.022318   64902 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.042685   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
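The sed invocations above adjust the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). Purely as an illustration, the two simplest rewrites could also be expressed with Go's regexp package; this is a simplified sketch, not the code minikube runs, and the in-place write assumes mode 0644 with no backup.

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Point CRI-O at the pause image and cgroupfs driver, as in the log above.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, data, 0644); err != nil {
			panic(err)
		}
	}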
	I0404 22:54:36.054710   64902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:36.066952   64902 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:36.067023   64902 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:36.083843   64902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
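The sequence above is a fallback: the bridge-nf-call-iptables sysctl is missing until br_netfilter is loaded, after which IPv4 forwarding is switched on. A minimal Go sketch of that check-then-load logic, assuming it runs as root on the guest (illustrative only):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// Key absent, as in the log: load the module that provides it.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
				return
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			fmt.Printf("enabling ip_forward failed: %v\n", err)
		}
	}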
	I0404 22:54:36.096564   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:36.228943   64902 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:54:36.376198   64902 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:36.376276   64902 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:36.382097   64902 start.go:562] Will wait 60s for crictl version
	I0404 22:54:36.382176   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:54:36.386651   64902 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:36.425845   64902 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:36.425931   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.456397   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.491436   64902 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:54:34.967133   65047 main.go:141] libmachine: (no-preload-024416) Calling .Start
	I0404 22:54:34.967354   65047 main.go:141] libmachine: (no-preload-024416) Ensuring networks are active...
	I0404 22:54:34.968261   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network default is active
	I0404 22:54:34.968700   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network mk-no-preload-024416 is active
	I0404 22:54:34.969252   65047 main.go:141] libmachine: (no-preload-024416) Getting domain xml...
	I0404 22:54:34.969956   65047 main.go:141] libmachine: (no-preload-024416) Creating domain...
	I0404 22:54:36.226773   65047 main.go:141] libmachine: (no-preload-024416) Waiting to get IP...
	I0404 22:54:36.227926   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.228451   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.228535   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.228427   65995 retry.go:31] will retry after 247.657069ms: waiting for machine to come up
	I0404 22:54:36.477997   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.478543   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.478576   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.478489   65995 retry.go:31] will retry after 249.517341ms: waiting for machine to come up
	I0404 22:54:36.730176   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.730636   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.730665   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.730591   65995 retry.go:31] will retry after 476.552832ms: waiting for machine to come up
	I0404 22:54:37.209208   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.209677   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.209709   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.209661   65995 retry.go:31] will retry after 435.547363ms: waiting for machine to come up
	I0404 22:54:37.647412   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.647985   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.648014   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.647932   65995 retry.go:31] will retry after 506.589084ms: waiting for machine to come up
	I0404 22:54:38.155673   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:38.156207   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:38.156255   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:38.156171   65995 retry.go:31] will retry after 890.290504ms: waiting for machine to come up
	I0404 22:54:36.493249   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:36.496475   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.496905   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:36.496937   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.497232   64902 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:36.502139   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:36.517272   64902 kubeadm.go:877] updating cluster {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:36.517432   64902 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:54:36.517497   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:36.564031   64902 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:54:36.564108   64902 ssh_runner.go:195] Run: which lz4
	I0404 22:54:36.568713   64902 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0404 22:54:36.573960   64902 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:54:36.574006   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:54:38.239579   64902 crio.go:462] duration metric: took 1.670908361s to copy over tarball
	I0404 22:54:38.239657   64902 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:54:40.665443   64902 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.425757745s)
	I0404 22:54:40.665487   64902 crio.go:469] duration metric: took 2.425877834s to extract the tarball
	I0404 22:54:40.665496   64902 ssh_runner.go:146] rm: /preloaded.tar.lz4
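The preload file copied and unpacked above is a plain tar stream compressed with lz4. As an illustration only, the sketch below lists its entries rather than extracting them (so it ignores the --xattrs handling from the log); it assumes the third-party github.com/pierrec/lz4/v4 reader and reuses the /preloaded.tar.lz4 path.

	package main

	import (
		"archive/tar"
		"fmt"
		"io"
		"os"

		"github.com/pierrec/lz4/v4"
	)

	func main() {
		f, err := os.Open("/preloaded.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Decompress on the fly and walk the tar entries.
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s (%d bytes)\n", hdr.Name, hdr.Size)
		}
	}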
	I0404 22:54:40.703772   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:40.754221   64902 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:54:40.754248   64902 cache_images.go:84] Images are preloaded, skipping loading
	I0404 22:54:40.754256   64902 kubeadm.go:928] updating node { 192.168.61.137 8443 v1.29.3 crio true true} ...
	I0404 22:54:40.754363   64902 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-143118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:54:40.754464   64902 ssh_runner.go:195] Run: crio config
	I0404 22:54:40.811854   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:40.811888   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:40.811906   64902 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:54:40.811936   64902 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-143118 NodeName:embed-certs-143118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:54:40.812177   64902 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-143118"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
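The kubeadm, kubelet and kube-proxy configuration above is rendered by minikube from the cluster settings. Purely as an illustration of that kind of templating (a simplified sketch, not minikube's actual template or data structures), the InitConfiguration stanza could be produced like this:

	package main

	import (
		"os"
		"text/template"
	)

	type nodeCfg struct {
		Name             string
		AdvertiseAddress string
		BindPort         int
		CRISocket        string
	}

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.Name}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`

	func main() {
		tmpl := template.Must(template.New("init").Parse(initCfg))
		cfg := nodeCfg{
			Name:             "embed-certs-143118",
			AdvertiseAddress: "192.168.61.137",
			BindPort:         8443,
			CRISocket:        "unix:///var/run/crio/crio.sock",
		}
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}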
	I0404 22:54:40.812313   64902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:54:40.824209   64902 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:54:40.824295   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:54:40.835585   64902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0404 22:54:40.854777   64902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:54:40.875571   64902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0404 22:54:40.896639   64902 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0404 22:54:40.901267   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:40.916843   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:41.050188   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:41.070935   64902 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118 for IP: 192.168.61.137
	I0404 22:54:41.070957   64902 certs.go:194] generating shared ca certs ...
	I0404 22:54:41.070972   64902 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:41.071132   64902 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:54:41.071191   64902 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:54:41.071205   64902 certs.go:256] generating profile certs ...
	I0404 22:54:41.071322   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/client.key
	I0404 22:54:41.071399   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key.e7f8ac5b
	I0404 22:54:41.071435   64902 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key
	I0404 22:54:41.071553   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:54:41.071585   64902 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:54:41.071596   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:54:41.071624   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:54:41.071649   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:54:41.071670   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:54:41.071725   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:41.072445   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:54:41.108370   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:54:41.142072   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:54:41.179263   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:54:41.226769   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0404 22:54:41.273570   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:54:41.306526   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:54:41.336764   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:54:41.367053   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:54:39.048106   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.048587   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.048616   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.048540   65995 retry.go:31] will retry after 946.742057ms: waiting for machine to come up
	I0404 22:54:39.997241   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.997737   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.997774   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.997679   65995 retry.go:31] will retry after 1.053079472s: waiting for machine to come up
	I0404 22:54:41.052284   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:41.052796   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:41.052835   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:41.052762   65995 retry.go:31] will retry after 1.551456209s: waiting for machine to come up
	I0404 22:54:42.606789   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:42.607297   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:42.607335   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:42.607228   65995 retry.go:31] will retry after 2.022953695s: waiting for machine to come up
	I0404 22:54:41.395449   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:54:41.549507   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:54:41.578394   64902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:54:41.597960   64902 ssh_runner.go:195] Run: openssl version
	I0404 22:54:41.604217   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:54:41.618340   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623568   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623626   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.630781   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:54:41.644981   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:54:41.657992   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663188   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663242   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.670992   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:54:41.683812   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:54:41.696848   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702441   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702499   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.709270   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
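The openssl/ln steps above install the extra CA files into /etc/ssl/certs under their subject-hash names (for example 3ec20f2e.0 for 125542.pem). A small Go sketch of the same idea, shelling out to openssl for the hash and creating the symlink; minimal error handling, illustrative only:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/125542.pem"

		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mimic the `ln -fs` force flag
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", pemPath)
	}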
	I0404 22:54:41.722509   64902 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:54:41.727688   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:54:41.734456   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:54:41.741006   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:54:41.748106   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:54:41.754559   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:54:41.761107   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
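Each "-checkend 86400" probe above asks whether a control-plane certificate expires within the next 24 hours. The equivalent check with crypto/x509, reusing one of the paths from the log, would look roughly like the sketch below.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// -checkend 86400: report if the cert expires within the next 24 hours.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}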
	I0404 22:54:41.768416   64902 kubeadm.go:391] StartCluster: {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:54:41.768497   64902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:54:41.768557   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.815664   64902 cri.go:89] found id: ""
	I0404 22:54:41.815737   64902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:54:41.829255   64902 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:54:41.829283   64902 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:54:41.829290   64902 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:54:41.829333   64902 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:54:41.842482   64902 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:54:41.843868   64902 kubeconfig.go:125] found "embed-certs-143118" server: "https://192.168.61.137:8443"
	I0404 22:54:41.846707   64902 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:54:41.858505   64902 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.137
	I0404 22:54:41.858544   64902 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:54:41.858558   64902 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:54:41.858616   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.905608   64902 cri.go:89] found id: ""
	I0404 22:54:41.905686   64902 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:54:41.928336   64902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:54:41.939893   64902 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:54:41.939913   64902 kubeadm.go:156] found existing configuration files:
	
	I0404 22:54:41.939967   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:54:41.950100   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:54:41.950159   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:54:41.961297   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:54:41.972267   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:54:41.972350   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:54:41.983395   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:54:41.994162   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:54:41.994256   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:54:42.005118   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:54:42.015514   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:54:42.015583   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
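[editor's note] The grep/rm pairs above implement a simple rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed so kubeadm can regenerate it. A hedged sketch of that loop, with the file list taken from the log (the real code issues these checks as shell commands over SSH rather than reading files locally):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pruneStaleKubeconfigs removes every listed kubeconfig that is missing or
    // does not mention the expected control-plane endpoint, mirroring the
    // grep-then-"rm -f" sequence in the log.
    func pruneStaleKubeconfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: delete so kubeadm rewrites it.
                os.Remove(f)
                fmt.Printf("removed stale config %s\n", f)
            }
        }
    }

    func main() {
        pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }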
	I0404 22:54:42.027013   64902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:54:42.037495   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:42.144612   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.301722   64902 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.157071112s)
	I0404 22:54:43.301751   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.527881   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.621621   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
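[editor's note] Rather than a full "kubeadm init", the restart path above replays individual phases against the freshly copied /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and local etcd. A minimal sketch of that sequence, assuming kubeadm sits at the version-pinned path shown in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const kubeadm = "/var/lib/minikube/binaries/v1.29.3/kubeadm"
        const config = "/var/tmp/minikube/kubeadm.yaml"

        // Same phase order as the log; each phase is replayed against the
        // existing data directory instead of bootstrapping from scratch.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", config)
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Printf("phase %v failed: %v\n", p, err)
                return
            }
        }
    }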
	I0404 22:54:43.708816   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:54:43.708949   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.209626   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.709138   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.734681   64902 api_server.go:72] duration metric: took 1.025863443s to wait for apiserver process to appear ...
	I0404 22:54:44.734715   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:54:44.734742   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.735296   64902 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
	I0404 22:54:45.235123   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.632681   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:44.633163   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:44.633192   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:44.633124   65995 retry.go:31] will retry after 2.627056472s: waiting for machine to come up
	I0404 22:54:47.262212   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:47.262588   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:47.262618   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:47.262539   65995 retry.go:31] will retry after 3.141452547s: waiting for machine to come up
	I0404 22:54:47.737311   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.737339   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:47.737356   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:47.782485   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.782519   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:48.235046   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.240034   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.240068   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:48.735331   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.745583   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.745624   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:49.235513   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:49.239826   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:54:49.248617   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:54:49.248647   64902 api_server.go:131] duration metric: took 4.51392228s to wait for apiserver health ...
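[editor's note] The healthz probe above is unauthenticated, so the 403 from system:anonymous and the 500 while post-start hooks (rbac/bootstrap-roles, default priority classes) settle are both treated as "not ready yet"; only a plain 200/"ok" ends the wait. A hedged sketch of such a poll loop, with certificate verification skipped because, like the probe in the log, it presents no client certificate:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls /healthz until it returns 200 or the deadline passes.
    // 403 and 500 responses are expected transients during control-plane startup.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.137:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }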
	I0404 22:54:49.248655   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:49.248662   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:49.250501   64902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:54:49.252206   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:54:49.271104   64902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
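[editor's note] With the apiserver healthy, a bridge CNI config is dropped into /etc/cni/net.d (the 496-byte 1-k8s.conflist above). The log does not show the file's contents, so the snippet below writes an illustrative bridge-plus-portmap conflist of the usual shape; the subnet and exact fields are assumptions, not the values minikube wrote:

    package main

    import (
        "fmt"
        "os"
    )

    // An illustrative bridge CNI chain: a bridge plugin with host-local IPAM,
    // followed by portmap for hostPort support. All values are placeholders.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        path := "/etc/cni/net.d/1-k8s.conflist"
        if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        fmt.Println("wrote", path)
    }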
	I0404 22:54:49.297257   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:54:49.309692   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:54:49.309726   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:54:49.309733   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:54:49.309741   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:54:49.309746   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:54:49.309751   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:54:49.309756   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:54:49.309764   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:54:49.309768   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:54:49.309775   64902 system_pods.go:74] duration metric: took 12.491664ms to wait for pod list to return data ...
	I0404 22:54:49.309782   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:54:49.315864   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:54:49.315890   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:54:49.315901   64902 node_conditions.go:105] duration metric: took 6.114458ms to run NodePressure ...
	I0404 22:54:49.315926   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:49.610849   64902 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615336   64902 kubeadm.go:733] kubelet initialised
	I0404 22:54:49.615356   64902 kubeadm.go:734] duration metric: took 4.484086ms waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615364   64902 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:49.621224   64902 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.626963   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.626988   64902 pod_ready.go:81] duration metric: took 5.735991ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.626996   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.627002   64902 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.633596   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633620   64902 pod_ready.go:81] duration metric: took 6.610566ms for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.633628   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633634   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.639137   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639168   64902 pod_ready.go:81] duration metric: took 5.51865ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.639177   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639183   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.701443   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701475   64902 pod_ready.go:81] duration metric: took 62.283656ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.701488   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701497   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.100946   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100974   64902 pod_ready.go:81] duration metric: took 399.467513ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.100984   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100990   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.501078   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501115   64902 pod_ready.go:81] duration metric: took 400.110991ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.501126   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501135   64902 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.902773   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902804   64902 pod_ready.go:81] duration metric: took 401.658155ms for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.902817   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902826   64902 pod_ready.go:38] duration metric: took 1.287454227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
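[editor's note] Each pod_ready wait above short-circuits as soon as it sees that the hosting node is not Ready, which is why every system pod is "skipped" rather than waited on. A condensed sketch of that node-readiness gate using client-go (error handling trimmed; the kubeconfig path and node name here are taken from the log but are otherwise assumptions about how you would reproduce the check):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-143118", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if !nodeReady(node) {
            // Matches the log: per-pod waits are skipped while the node is NotReady.
            fmt.Println("node not Ready yet; skipping per-pod waits")
            return
        }
        pods, _ := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        fmt.Printf("kube-system has %d pods\n", len(pods.Items))
    }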
	I0404 22:54:50.902842   64902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:54:50.915531   64902 ops.go:34] apiserver oom_adj: -16
	I0404 22:54:50.915553   64902 kubeadm.go:591] duration metric: took 9.086257382s to restartPrimaryControlPlane
	I0404 22:54:50.915562   64902 kubeadm.go:393] duration metric: took 9.147152807s to StartCluster
	I0404 22:54:50.915580   64902 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.915663   64902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:54:50.917278   64902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.917558   64902 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:54:50.919536   64902 out.go:177] * Verifying Kubernetes components...
	I0404 22:54:50.917632   64902 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:54:50.917801   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:50.921079   64902 addons.go:69] Setting default-storageclass=true in profile "embed-certs-143118"
	I0404 22:54:50.921090   64902 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-143118"
	I0404 22:54:50.921093   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:50.921114   64902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-143118"
	I0404 22:54:50.921138   64902 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-143118"
	W0404 22:54:50.921153   64902 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:54:50.921079   64902 addons.go:69] Setting metrics-server=true in profile "embed-certs-143118"
	I0404 22:54:50.921195   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921217   64902 addons.go:234] Setting addon metrics-server=true in "embed-certs-143118"
	W0404 22:54:50.921232   64902 addons.go:243] addon metrics-server should already be in state true
	I0404 22:54:50.921254   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921467   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921507   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921683   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921709   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921753   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921781   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.937216   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36235
	I0404 22:54:50.937299   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0404 22:54:50.937762   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.937802   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.938291   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938313   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938321   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938333   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938661   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.938670   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.939249   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939271   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939300   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.939311   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.941042   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0404 22:54:50.941525   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.942072   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.942100   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.942499   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.942729   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.946461   64902 addons.go:234] Setting addon default-storageclass=true in "embed-certs-143118"
	W0404 22:54:50.946482   64902 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:54:50.946510   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.946882   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.946915   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.955518   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0404 22:54:50.955557   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0404 22:54:50.955998   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956052   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956571   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956600   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956720   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956747   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956986   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957070   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957190   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.957232   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.959071   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.959241   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.961349   64902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:50.963021   64902 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:54:50.964576   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:54:50.964596   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:54:50.963102   64902 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:50.964618   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.964632   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:54:50.964657   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.965167   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0404 22:54:50.965591   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.966202   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.966227   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.966554   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.967093   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.967129   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.968169   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968490   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968564   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.968589   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968740   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.968871   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969009   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969078   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.969104   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.969280   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.969336   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.969576   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969732   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969883   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.985964   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0404 22:54:50.986398   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.986935   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.986961   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.987432   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.987635   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.989705   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.989956   64902 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:50.989970   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:54:50.989984   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.993058   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993505   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.993540   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993703   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.993916   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.994072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.994255   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:51.108711   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:51.128024   64902 node_ready.go:35] waiting up to 6m0s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:51.190119   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:51.199620   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:54:51.199648   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:54:51.223007   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:51.235174   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:54:51.235203   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:54:51.260985   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:51.261010   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:54:51.283555   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:52.285657   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095501296s)
	I0404 22:54:52.285706   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.062659931s)
	I0404 22:54:52.285731   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285744   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285755   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.285767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286091   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286108   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286118   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286128   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286218   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.286282   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286294   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286306   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286328   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.288015   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288022   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288013   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288092   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288106   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.288149   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.293937   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.293962   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.294199   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.294243   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.294254   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320063   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.036455618s)
	I0404 22:54:52.320206   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320223   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320525   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320546   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320559   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320568   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320821   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320853   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320860   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320870   64902 addons.go:470] Verifying addon metrics-server=true in "embed-certs-143118"
	I0404 22:54:52.323920   64902 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
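[editor's note] The addon step above amounts to a few concurrent "kubectl apply" calls run on the node against manifests copied into /etc/kubernetes/addons, using the node-local kubeconfig and the version-pinned kubectl. A minimal sketch of one such invocation, with paths copied from the log; the simplification is that it runs locally instead of over SSH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Apply the metrics-server manifests the same way the log shows:
        // sudo with KUBECONFIG set, version-pinned kubectl, multiple -f flags.
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.29.3/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("kubectl apply failed:", err)
        }
    }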
	I0404 22:54:50.405817   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:50.406204   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:50.406240   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:50.406163   65995 retry.go:31] will retry after 3.600637009s: waiting for machine to come up
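[editor's note] The interleaved no-preload-024416 lines show libmachine polling libvirt for the VM's DHCP lease and sleeping with a growing, jittered delay between attempts ("will retry after 2.6s ... 3.1s ... 3.6s"). A minimal sketch of that retry pattern; lookupIP is a hypothetical stand-in for the real libvirt lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a placeholder for the libvirt DHCP-lease lookup; in this sketch
    // it always fails, standing in for a VM that has not acquired a lease yet.
    func lookupIP(domain string) (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a jittered, growing backoff, like the
    // retry.go lines in the log.
    func waitForIP(domain string, attempts int) (string, error) {
        backoff := 500 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("waiting for machine to come up, retrying after %s\n", sleep)
            time.Sleep(sleep)
            backoff *= 2
        }
        return "", fmt.Errorf("no IP for %s after %d attempts", domain, attempts)
    }

    func main() {
        if _, err := waitForIP("no-preload-024416", 5); err != nil {
            fmt.Println(err)
        }
    }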
	I0404 22:54:55.277382   65393 start.go:364] duration metric: took 3m58.692301923s to acquireMachinesLock for "old-k8s-version-343162"
	I0404 22:54:55.277485   65393 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:55.277502   65393 fix.go:54] fixHost starting: 
	I0404 22:54:55.277878   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:55.277920   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:55.297525   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0404 22:54:55.297963   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:55.298433   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:54:55.298456   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:55.298792   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:55.298962   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:54:55.299129   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetState
	I0404 22:54:55.300682   65393 fix.go:112] recreateIfNeeded on old-k8s-version-343162: state=Stopped err=<nil>
	I0404 22:54:55.300717   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	W0404 22:54:55.300937   65393 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:55.303925   65393 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-343162" ...
	I0404 22:54:52.325280   64902 addons.go:505] duration metric: took 1.407646741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0404 22:54:53.132199   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.136881   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.305412   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .Start
	I0404 22:54:55.305607   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring networks are active...
	I0404 22:54:55.306344   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network default is active
	I0404 22:54:55.306786   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network mk-old-k8s-version-343162 is active
	I0404 22:54:55.307281   65393 main.go:141] libmachine: (old-k8s-version-343162) Getting domain xml...
	I0404 22:54:55.308086   65393 main.go:141] libmachine: (old-k8s-version-343162) Creating domain...
	I0404 22:54:54.010930   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011401   65047 main.go:141] libmachine: (no-preload-024416) Found IP for machine: 192.168.50.77
	I0404 22:54:54.011426   65047 main.go:141] libmachine: (no-preload-024416) Reserving static IP address...
	I0404 22:54:54.011444   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has current primary IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011871   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.011896   65047 main.go:141] libmachine: (no-preload-024416) DBG | skip adding static IP to network mk-no-preload-024416 - found existing host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"}
	I0404 22:54:54.011924   65047 main.go:141] libmachine: (no-preload-024416) Reserved static IP address: 192.168.50.77
	I0404 22:54:54.011942   65047 main.go:141] libmachine: (no-preload-024416) Waiting for SSH to be available...
	I0404 22:54:54.011956   65047 main.go:141] libmachine: (no-preload-024416) DBG | Getting to WaitForSSH function...
	I0404 22:54:54.014714   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015164   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.015190   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015357   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH client type: external
	I0404 22:54:54.015401   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa (-rw-------)
	I0404 22:54:54.015469   65047 main.go:141] libmachine: (no-preload-024416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:54.015495   65047 main.go:141] libmachine: (no-preload-024416) DBG | About to run SSH command:
	I0404 22:54:54.015513   65047 main.go:141] libmachine: (no-preload-024416) DBG | exit 0
	I0404 22:54:54.140293   65047 main.go:141] libmachine: (no-preload-024416) DBG | SSH cmd err, output: <nil>: 
	I0404 22:54:54.140713   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetConfigRaw
	I0404 22:54:54.141290   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.144062   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144446   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.144476   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144752   65047 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/config.json ...
	I0404 22:54:54.144967   65047 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:54.144984   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:54.145210   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.147699   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148016   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.148046   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148244   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.148430   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148571   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.148878   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.149071   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.149082   65047 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:54.256984   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:54.257021   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257324   65047 buildroot.go:166] provisioning hostname "no-preload-024416"
	I0404 22:54:54.257354   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257530   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.259995   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260339   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.260369   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.260709   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260862   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260986   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.261172   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.261348   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.261360   65047 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-024416 && echo "no-preload-024416" | sudo tee /etc/hostname
	I0404 22:54:54.383376   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-024416
	
	I0404 22:54:54.383401   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.386482   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.386993   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.387035   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.387179   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.387368   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387705   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.387954   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.388225   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.388258   65047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-024416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-024416/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-024416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:54.506307   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:54.506341   65047 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:54.506358   65047 buildroot.go:174] setting up certificates
	I0404 22:54:54.506366   65047 provision.go:84] configureAuth start
	I0404 22:54:54.506375   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.506653   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.509737   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510146   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.510177   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.512864   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513212   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.513244   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513339   65047 provision.go:143] copyHostCerts
	I0404 22:54:54.513391   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:54.513410   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:54.513464   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:54.513547   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:54.513559   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:54.513579   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:54.513642   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:54.513653   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:54.513672   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:54.513728   65047 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.no-preload-024416 san=[127.0.0.1 192.168.50.77 localhost minikube no-preload-024416]
	I0404 22:54:54.574205   65047 provision.go:177] copyRemoteCerts
	I0404 22:54:54.574261   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:54.574292   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.577012   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577382   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.577417   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577576   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.577750   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.577913   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.578009   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:54.662795   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:54:54.688507   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:54.714322   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:54.743237   65047 provision.go:87] duration metric: took 236.859201ms to configureAuth
	I0404 22:54:54.743277   65047 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:54.743504   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:54:54.743573   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.746762   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747249   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.747277   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747454   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.747656   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747791   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747912   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.748073   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.748321   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.748342   65047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:55.023000   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:55.023025   65047 machine.go:97] duration metric: took 878.045187ms to provisionDockerMachine
	I0404 22:54:55.023038   65047 start.go:293] postStartSetup for "no-preload-024416" (driver="kvm2")
	I0404 22:54:55.023052   65047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:55.023072   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.023459   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:55.023493   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.026491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.026914   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.026938   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.027162   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.027350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.027490   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.027627   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.111497   65047 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:55.116152   65047 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:55.116178   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:55.116260   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:55.116331   65047 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:55.116412   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:55.126233   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:55.159228   65047 start.go:296] duration metric: took 136.175727ms for postStartSetup
	I0404 22:54:55.159267   65047 fix.go:56] duration metric: took 20.217610648s for fixHost
	I0404 22:54:55.159286   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.162009   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162439   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.162469   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162665   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.162941   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163119   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163353   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.163531   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:55.163697   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:55.163707   65047 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:54:55.277216   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271295.257019839
	
	I0404 22:54:55.277241   65047 fix.go:216] guest clock: 1712271295.257019839
	I0404 22:54:55.277254   65047 fix.go:229] Guest: 2024-04-04 22:54:55.257019839 +0000 UTC Remote: 2024-04-04 22:54:55.159270151 +0000 UTC m=+291.659154910 (delta=97.749688ms)
	I0404 22:54:55.277281   65047 fix.go:200] guest clock delta is within tolerance: 97.749688ms
	I0404 22:54:55.277308   65047 start.go:83] releasing machines lock for "no-preload-024416", held for 20.335673292s
	I0404 22:54:55.277345   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.277650   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:55.280507   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.280968   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.280998   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.281137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281654   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281845   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281930   65047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:55.281967   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.282068   65047 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:55.282085   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.284662   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.284975   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285007   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285034   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285217   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285379   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.285469   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285542   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.285677   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285722   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.286037   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.286197   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.286343   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.404469   65047 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:55.411104   65047 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:55.570700   65047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:55.577807   65047 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:55.577879   65047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:55.595095   65047 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:55.595115   65047 start.go:494] detecting cgroup driver to use...
	I0404 22:54:55.595180   65047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:55.611801   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:55.626402   65047 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:55.626460   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:55.642719   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:55.663140   65047 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:55.784155   65047 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:55.933709   65047 docker.go:233] disabling docker service ...
	I0404 22:54:55.933782   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:55.951743   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:55.966349   65047 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:56.129621   65047 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:56.295246   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:56.312815   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:56.334486   65047 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:56.334554   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.351987   65047 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:56.352067   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.368799   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.390239   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.409407   65047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:56.421785   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.434016   65047 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.457813   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.474509   65047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:56.489126   65047 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:56.489186   65047 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:56.509461   65047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:54:56.523048   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:56.708034   65047 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:54:56.860854   65047 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:56.860931   65047 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:56.867310   65047 start.go:562] Will wait 60s for crictl version
	I0404 22:54:56.867380   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:56.871431   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:56.911015   65047 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:56.911096   65047 ssh_runner.go:195] Run: crio --version
	I0404 22:54:56.942313   65047 ssh_runner.go:195] Run: crio --version
	I0404 22:54:56.985247   65047 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0404 22:54:56.986628   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:56.990377   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.990775   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:56.990805   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.991069   65047 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:56.996831   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:57.013429   65047 kubeadm.go:877] updating cluster {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:57.013580   65047 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 22:54:57.013630   65047 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:57.066460   65047 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0404 22:54:57.066491   65047 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:54:57.066546   65047 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.066569   65047 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.066598   65047 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.066677   65047 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.066674   65047 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.066703   65047 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.066755   65047 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0404 22:54:57.066974   65047 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068510   65047 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068579   65047 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.068590   65047 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.068603   65047 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.068648   65047 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.068516   65047 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.068666   65047 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0404 22:54:57.068727   65047 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.282811   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.319459   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.329131   65047 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0404 22:54:57.329170   65047 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.329216   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.334176   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.395259   65047 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0404 22:54:57.395307   65047 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.395333   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.395348   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.406668   65047 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0404 22:54:57.406719   65047 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.406769   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.428655   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0404 22:54:57.430830   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.434683   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.439273   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439326   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.439406   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439428   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.565067   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690361   65047 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0404 22:54:57.690402   65047 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.690456   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690478   65047 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0404 22:54:57.690509   65047 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.690556   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690563   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690608   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0404 22:54:57.690635   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0404 22:54:57.690653   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690669   65047 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0404 22:54:57.690702   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690737   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:57.690702   65047 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690667   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690772   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.702038   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.703294   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0404 22:54:57.703314   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.861311   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.632801   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:58.632291   64902 node_ready.go:49] node "embed-certs-143118" has status "Ready":"True"
	I0404 22:54:58.632328   64902 node_ready.go:38] duration metric: took 7.504264997s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:58.632341   64902 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:58.640851   64902 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652075   64902 pod_ready.go:92] pod "coredns-76f75df574-9qh9s" in "kube-system" namespace has status "Ready":"True"
	I0404 22:54:58.652100   64902 pod_ready.go:81] duration metric: took 11.215741ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652114   64902 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:00.659219   64902 pod_ready.go:102] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"False"
	I0404 22:54:56.653772   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting to get IP...
	I0404 22:54:56.654708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.655215   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.655284   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.655205   66200 retry.go:31] will retry after 274.074964ms: waiting for machine to come up
	I0404 22:54:56.930932   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.931386   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.931451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.931345   66200 retry.go:31] will retry after 260.84968ms: waiting for machine to come up
	I0404 22:54:57.193886   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.194346   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.194368   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.194299   66200 retry.go:31] will retry after 334.40821ms: waiting for machine to come up
	I0404 22:54:57.530137   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.530694   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.530742   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.530631   66200 retry.go:31] will retry after 514.498594ms: waiting for machine to come up
	I0404 22:54:58.046394   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.046985   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.047025   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.046905   66200 retry.go:31] will retry after 466.899368ms: waiting for machine to come up
	I0404 22:54:58.515516   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.516090   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.516224   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.516149   66200 retry.go:31] will retry after 670.74835ms: waiting for machine to come up
	I0404 22:54:59.187992   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:59.188549   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:59.188579   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:59.188496   66200 retry.go:31] will retry after 849.489739ms: waiting for machine to come up
	I0404 22:55:00.039689   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:00.040255   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:00.040290   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:00.040190   66200 retry.go:31] will retry after 1.450679545s: waiting for machine to come up
	I0404 22:54:59.998625   65047 ssh_runner.go:235] Completed: which crictl: (2.307831694s)
	I0404 22:54:59.998689   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.307893929s)
	I0404 22:54:59.998766   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0404 22:54:59.998771   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0: (2.296702121s)
	I0404 22:54:59.998810   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0: (2.295483217s)
	I0404 22:54:59.998704   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:59.998853   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998720   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.307994873s)
	I0404 22:54:59.998819   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.998930   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0404 22:54:59.998951   65047 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998854   65047 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.137514139s)
	I0404 22:54:59.998961   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998990   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998992   65047 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0404 22:54:59.998998   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.999027   65047 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:59.999070   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:55:00.009724   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:00.009945   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0404 22:55:01.660284   64902 pod_ready.go:92] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.660348   64902 pod_ready.go:81] duration metric: took 3.008175771s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.660362   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666661   64902 pod_ready.go:92] pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.666687   64902 pod_ready.go:81] duration metric: took 6.316138ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666701   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672891   64902 pod_ready.go:92] pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.672915   64902 pod_ready.go:81] duration metric: took 6.205783ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672928   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679237   64902 pod_ready.go:92] pod "kube-proxy-psst7" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.679268   64902 pod_ready.go:81] duration metric: took 6.332124ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679280   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833788   64902 pod_ready.go:92] pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.833817   64902 pod_ready.go:81] duration metric: took 154.528833ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833831   64902 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:03.841405   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:05.842970   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:01.492870   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:01.493414   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:01.493440   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:01.493366   66200 retry.go:31] will retry after 1.844779665s: waiting for machine to come up
	I0404 22:55:03.340065   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:03.340708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:03.340739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:03.340650   66200 retry.go:31] will retry after 1.954275124s: waiting for machine to come up
	I0404 22:55:05.297408   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:05.297938   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:05.297986   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:05.297876   66200 retry.go:31] will retry after 1.771664796s: waiting for machine to come up
	I0404 22:55:04.067188   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.068174109s)
	I0404 22:55:04.067285   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0404 22:55:04.067284   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.057537661s)
	I0404 22:55:04.067312   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067322   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0404 22:55:04.067244   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (4.068231608s)
	I0404 22:55:04.067203   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (4.068350332s)
	I0404 22:55:04.067371   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0404 22:55:04.067380   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067395   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0404 22:55:04.067414   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:04.067486   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:06.758753   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.691348313s)
	I0404 22:55:06.758790   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0404 22:55:06.758816   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:06.758812   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.691301898s)
	I0404 22:55:06.758845   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0404 22:55:06.758850   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.691417107s)
	I0404 22:55:06.758876   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0404 22:55:06.758884   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:07.843038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:10.342614   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:07.072194   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:07.072586   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:07.072614   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:07.072532   66200 retry.go:31] will retry after 2.503470191s: waiting for machine to come up
	I0404 22:55:09.577312   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:09.577837   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:09.577877   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:09.577785   66200 retry.go:31] will retry after 3.900843515s: waiting for machine to come up
	I0404 22:55:09.231002   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.472089065s)
	I0404 22:55:09.231033   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0404 22:55:09.231064   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:09.231114   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:10.787933   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.556787731s)
	I0404 22:55:10.788017   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0404 22:55:10.788087   65047 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:10.788163   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:12.668739   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880546401s)
	I0404 22:55:12.668772   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0404 22:55:12.668806   65047 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:12.668877   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
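For readers tracing the cache-load step above: on the guest, each cached image tarball under /var/lib/minikube/images is first stat'ed to decide whether the transfer can be skipped, then imported with podman. A minimal shell sketch of that flow, using a path taken from this log (illustration only, not minikube's actual Go code):

    # roughly what ssh_runner executes on the guest for one cached image
    IMG=/var/lib/minikube/images/storage-provisioner_v5
    stat -c "%s %y" "$IMG" && echo "already transferred, copy skipped"
    sudo podman load -i "$IMG"   # imports the tarball into the containers/storage store CRI-O reads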
	I0404 22:55:15.073701   64791 start.go:364] duration metric: took 55.863036918s to acquireMachinesLock for "default-k8s-diff-port-952083"
	I0404 22:55:15.073768   64791 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:55:15.073780   64791 fix.go:54] fixHost starting: 
	I0404 22:55:15.074227   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:15.074264   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:15.094449   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0404 22:55:15.094905   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:15.095422   64791 main.go:141] libmachine: Using API Version  1
	I0404 22:55:15.095446   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:15.095809   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:15.096067   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:15.096216   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 22:55:15.097752   64791 fix.go:112] recreateIfNeeded on default-k8s-diff-port-952083: state=Stopped err=<nil>
	I0404 22:55:15.097779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	W0404 22:55:15.097988   64791 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:55:15.100278   64791 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-952083" ...
	I0404 22:55:12.840463   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:14.841903   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:13.482797   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483264   65393 main.go:141] libmachine: (old-k8s-version-343162) Found IP for machine: 192.168.39.247
	I0404 22:55:13.483286   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has current primary IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483293   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserving static IP address...
	I0404 22:55:13.483713   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.483753   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserved static IP address: 192.168.39.247
	I0404 22:55:13.483776   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | skip adding static IP to network mk-old-k8s-version-343162 - found existing host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"}
	I0404 22:55:13.483790   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting for SSH to be available...
	I0404 22:55:13.483810   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Getting to WaitForSSH function...
	I0404 22:55:13.485889   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486260   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.486303   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486379   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH client type: external
	I0404 22:55:13.486411   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa (-rw-------)
	I0404 22:55:13.486451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:13.486465   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | About to run SSH command:
	I0404 22:55:13.486493   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | exit 0
	I0404 22:55:13.616680   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:13.617056   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetConfigRaw
	I0404 22:55:13.617819   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:13.620441   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.620959   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.620987   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.621259   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:55:13.621490   65393 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:13.621512   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:13.621715   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.624353   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.624739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.624769   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.625019   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.625218   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625392   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625540   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.625730   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.625987   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.626005   65393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:13.745162   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:13.745198   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745533   65393 buildroot.go:166] provisioning hostname "old-k8s-version-343162"
	I0404 22:55:13.745558   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745773   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.748881   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749270   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.749304   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749393   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.749619   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749787   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.750304   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.750599   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.750624   65393 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-343162 && echo "old-k8s-version-343162" | sudo tee /etc/hostname
	I0404 22:55:13.887895   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-343162
	
	I0404 22:55:13.887942   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.891149   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891606   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.891657   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891985   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.892226   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892464   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892629   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.892839   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.893110   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.893139   65393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-343162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-343162/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-343162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:14.026612   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
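The hostname provisioning that just completed is two SSH commands: write the hostname, then make sure 127.0.1.1 resolves to it. Condensed from the commands shown in the log (the name is the one used in this run):

    NAME=old-k8s-version-343162
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # rewrite or append the 127.0.1.1 entry so the name resolves locally
    grep -xq '127.0.1.1\s.*' /etc/hosts \
      && sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts \
      || echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts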
	I0404 22:55:14.026651   65393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:14.026677   65393 buildroot.go:174] setting up certificates
	I0404 22:55:14.026691   65393 provision.go:84] configureAuth start
	I0404 22:55:14.026702   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:14.026996   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:14.030366   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.030831   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.030869   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.031171   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.033684   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034074   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.034115   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034266   65393 provision.go:143] copyHostCerts
	I0404 22:55:14.034319   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:14.034331   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:14.034384   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:14.034471   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:14.034479   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:14.034498   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:14.034559   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:14.034567   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:14.034585   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:14.034633   65393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-343162 san=[127.0.0.1 192.168.39.247 localhost minikube old-k8s-version-343162]
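provision.go creates that server certificate in Go, but the san=[...] list is easier to read as the equivalent OpenSSL invocation. This is only an illustrative equivalent with hypothetical file names in the current directory, not a command minikube runs:

    # sign a server cert for the machine with the same subject-alt-names as above
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.old-k8s-version-343162" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.247,DNS:localhost,DNS:minikube,DNS:old-k8s-version-343162')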
	I0404 22:55:14.305753   65393 provision.go:177] copyRemoteCerts
	I0404 22:55:14.305805   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:14.305830   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.308454   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308772   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.308812   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308940   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.309140   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.309315   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.309461   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.399449   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:14.431583   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0404 22:55:14.459757   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:55:14.489959   65393 provision.go:87] duration metric: took 463.256669ms to configureAuth
	I0404 22:55:14.489999   65393 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:14.490222   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:55:14.490314   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.493316   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493721   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.493746   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493935   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.494154   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494339   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494495   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.494669   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.494915   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.494940   65393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:14.813278   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:14.813306   65393 machine.go:97] duration metric: took 1.19180289s to provisionDockerMachine
	I0404 22:55:14.813318   65393 start.go:293] postStartSetup for "old-k8s-version-343162" (driver="kvm2")
	I0404 22:55:14.813328   65393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:14.813365   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:14.813712   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:14.813739   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.817108   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817543   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.817577   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817770   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.817970   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.818192   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.818371   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.908922   65393 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:14.914161   65393 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:14.914190   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:14.914279   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:14.914388   65393 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:14.914522   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:14.924829   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:14.952058   65393 start.go:296] duration metric: took 138.725667ms for postStartSetup
	I0404 22:55:14.952103   65393 fix.go:56] duration metric: took 19.674601379s for fixHost
	I0404 22:55:14.952151   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.954833   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955233   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.955273   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955501   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.955712   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.955866   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.956022   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.956189   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.956355   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.956367   65393 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:55:15.073532   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271315.019545661
	
	I0404 22:55:15.073560   65393 fix.go:216] guest clock: 1712271315.019545661
	I0404 22:55:15.073569   65393 fix.go:229] Guest: 2024-04-04 22:55:15.019545661 +0000 UTC Remote: 2024-04-04 22:55:14.952108062 +0000 UTC m=+258.528860140 (delta=67.437599ms)
	I0404 22:55:15.073595   65393 fix.go:200] guest clock delta is within tolerance: 67.437599ms
	I0404 22:55:15.073603   65393 start.go:83] releasing machines lock for "old-k8s-version-343162", held for 19.79616835s
	I0404 22:55:15.073645   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.073982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:15.077001   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077416   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.077464   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077684   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078280   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078473   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078552   65393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:15.078592   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.078687   65393 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:15.078714   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.081320   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081704   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.081724   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081752   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081995   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082190   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082212   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.082249   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.082366   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082519   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082557   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.082639   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082782   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082912   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.165958   65393 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:15.206476   65393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:15.356625   65393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:15.364893   65393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:15.364973   65393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:15.387634   65393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:15.387663   65393 start.go:494] detecting cgroup driver to use...
	I0404 22:55:15.387728   65393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:15.408455   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:15.427753   65393 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:15.427812   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:15.443713   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:15.462657   65393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:15.611611   65393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:15.799003   65393 docker.go:233] disabling docker service ...
	I0404 22:55:15.799058   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:15.819716   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:15.838428   65393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:15.998060   65393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:16.143821   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:16.162284   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:16.185030   65393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0404 22:55:16.185124   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.201354   65393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:16.201422   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.214191   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.232995   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.246872   65393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:16.260730   65393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:16.272532   65393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:16.272601   65393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:16.289516   65393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:16.306546   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:16.469991   65393 ssh_runner.go:195] Run: sudo systemctl restart crio
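Everything crio.go did in the block above reduces to a handful of edits to /etc/crio/crio.conf.d/02-crio.conf plus a service restart. The same commands, collected in one place (values are the ones this run used):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                        # drop any stale setting
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo modprobe br_netfilter                                         # bridge-nf-call-iptables was missing
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio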
	I0404 22:55:15.101770   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Start
	I0404 22:55:15.101942   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring networks are active...
	I0404 22:55:15.102800   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network default is active
	I0404 22:55:15.103162   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network mk-default-k8s-diff-port-952083 is active
	I0404 22:55:15.103685   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Getting domain xml...
	I0404 22:55:15.104598   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Creating domain...
	I0404 22:55:16.462772   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting to get IP...
	I0404 22:55:16.463782   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.464303   66369 retry.go:31] will retry after 209.546798ms: waiting for machine to come up
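While old-k8s-version is being provisioned, the default-k8s-diff-port profile above is restarting a stopped libvirt domain. The kvm2 driver talks to libvirt directly, but the virsh equivalents make the logged steps ("Ensuring networks are active", "Getting domain xml", "Creating domain", waiting for an IP) easier to follow; treat these as an illustration, not the driver's code path:

    virsh net-start default 2>/dev/null || true
    virsh net-start mk-default-k8s-diff-port-952083 2>/dev/null || true
    virsh dumpxml default-k8s-diff-port-952083 > /tmp/domain.xml   # "Getting domain xml..."
    virsh create /tmp/domain.xml                                   # "Creating domain..." boots the VM
    virsh domifaddr default-k8s-diff-port-952083                   # poll until a lease appears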
	I0404 22:55:16.618533   65393 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:16.618609   65393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:16.623677   65393 start.go:562] Will wait 60s for crictl version
	I0404 22:55:16.623740   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:16.627698   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:16.667097   65393 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:16.667195   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.700780   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.739287   65393 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0404 22:55:13.614563   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0404 22:55:13.614614   65047 cache_images.go:123] Successfully loaded all cached images
	I0404 22:55:13.614620   65047 cache_images.go:92] duration metric: took 16.548112387s to LoadCachedImages
	I0404 22:55:13.614638   65047 kubeadm.go:928] updating node { 192.168.50.77 8443 v1.30.0-rc.0 crio true true} ...
	I0404 22:55:13.614766   65047 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-024416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
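The kubelet unit fragment above ends up as the drop-in that is scp'd a little further down (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf). If a start failure needs debugging, standard systemd commands show which ExecStart the node actually picked up; nothing here is minikube-specific:

    systemctl cat kubelet                      # unit file plus all drop-ins
    systemctl show kubelet -p ExecStart        # the effective command line
    sudo journalctl -u kubelet --no-pager | tail -n 20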
	I0404 22:55:13.614859   65047 ssh_runner.go:195] Run: crio config
	I0404 22:55:13.670321   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:13.670352   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:13.670368   65047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:13.670397   65047 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.77 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-024416 NodeName:no-preload-024416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:13.670593   65047 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-024416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:13.670688   65047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0404 22:55:13.683863   65047 binaries.go:44] Found k8s binaries, skipping transfer
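Once the binaries are in place, the generated kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new (next few lines). A dry run against that file is a cheap way to surface schema errors before the real restart path runs; the flags are standard kubeadm, the paths come from this log, and whether minikube's restart flow would tolerate the extra run is an assumption:

    sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run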
	I0404 22:55:13.683944   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:13.694995   65047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0404 22:55:13.717964   65047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0404 22:55:13.738088   65047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0404 22:55:13.762413   65047 ssh_runner.go:195] Run: grep 192.168.50.77	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:13.767294   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:13.783621   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:13.925851   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:13.946583   65047 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416 for IP: 192.168.50.77
	I0404 22:55:13.946612   65047 certs.go:194] generating shared ca certs ...
	I0404 22:55:13.946632   65047 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:13.946873   65047 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:13.946932   65047 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:13.946946   65047 certs.go:256] generating profile certs ...
	I0404 22:55:13.947038   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/client.key
	I0404 22:55:13.947126   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key.8f9148e7
	I0404 22:55:13.947183   65047 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key
	I0404 22:55:13.947338   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:13.947388   65047 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:13.947402   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:13.947442   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:13.947480   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:13.947501   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:13.947538   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:13.948161   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:13.987518   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:14.023442   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:14.054901   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:14.096855   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0404 22:55:14.140589   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:14.171957   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:14.200219   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:55:14.228332   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:14.255955   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:14.281822   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:14.310385   65047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:14.330943   65047 ssh_runner.go:195] Run: openssl version
	I0404 22:55:14.337412   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:14.350470   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355351   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355405   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.361799   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:14.374526   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:14.388775   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394578   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394643   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.401888   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:14.415796   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:14.431180   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437443   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437498   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.444439   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
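The openssl x509 -hash / ln -fs pairs above implement OpenSSL's CA-directory convention: certificates in /etc/ssl/certs are looked up by subject-name hash, so every PEM needs a <hash>.0 symlink. A compact version using one of the files from this run (b5213941 is the hash the log shows for minikubeCA.pem):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs "$CERT"    # confirm the lookup resolves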
	I0404 22:55:14.458554   65047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:14.463877   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:14.471842   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:14.478423   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:14.485289   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:14.491991   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:14.499145   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:55:14.505734   65047 kubeadm.go:391] StartCluster: {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:14.505842   65047 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:14.505916   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.552731   65047 cri.go:89] found id: ""
	I0404 22:55:14.552818   65047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:14.564111   65047 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:14.564146   65047 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:14.564153   65047 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:14.564206   65047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:14.577068   65047 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:14.578074   65047 kubeconfig.go:125] found "no-preload-024416" server: "https://192.168.50.77:8443"
	I0404 22:55:14.579998   65047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:14.592375   65047 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.77
	I0404 22:55:14.592404   65047 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:14.592415   65047 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:14.592459   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.638880   65047 cri.go:89] found id: ""
	I0404 22:55:14.638970   65047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:14.663010   65047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:14.675814   65047 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:14.675853   65047 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:14.675900   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:14.688308   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:14.688375   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:14.702880   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:14.716656   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:14.716728   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:14.728605   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.739218   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:14.739283   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.750108   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:14.761607   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:14.761671   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:14.774056   65047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:14.787939   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:14.909162   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.334914   65047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.425719775s)
	I0404 22:55:16.334945   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.609889   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.686278   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.774460   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:16.774563   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.274803   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.775665   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.274707   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.298277   65047 api_server.go:72] duration metric: took 1.523817264s to wait for apiserver process to appear ...
	I0404 22:55:18.298307   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:18.298329   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:18.298961   65047 api_server.go:269] stopped: https://192.168.50.77:8443/healthz: Get "https://192.168.50.77:8443/healthz": dial tcp 192.168.50.77:8443: connect: connection refused
	I0404 22:55:16.842824   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:18.845888   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:21.344260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:16.740949   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:16.744057   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744491   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:16.744533   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744749   65393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:16.750820   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:16.769289   65393 kubeadm.go:877] updating cluster {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:16.769467   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:55:16.769531   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:16.827000   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:16.827077   65393 ssh_runner.go:195] Run: which lz4
	I0404 22:55:16.833494   65393 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:55:16.839898   65393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:16.839972   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0404 22:55:18.896288   65393 crio.go:462] duration metric: took 2.062838778s to copy over tarball
	I0404 22:55:18.896369   65393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:16.676096   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676944   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.676858   66369 retry.go:31] will retry after 272.178949ms: waiting for machine to come up
	I0404 22:55:16.951171   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951771   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.951666   66369 retry.go:31] will retry after 296.205822ms: waiting for machine to come up
	I0404 22:55:17.249449   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250006   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250039   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.249901   66369 retry.go:31] will retry after 395.504604ms: waiting for machine to come up
	I0404 22:55:17.647678   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648471   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.648337   66369 retry.go:31] will retry after 465.589308ms: waiting for machine to come up
	I0404 22:55:18.116437   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117692   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117740   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.117645   66369 retry.go:31] will retry after 763.715105ms: waiting for machine to come up
	I0404 22:55:18.883468   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884148   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884214   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.884004   66369 retry.go:31] will retry after 1.098461705s: waiting for machine to come up
	I0404 22:55:19.984522   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985080   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985111   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:19.985029   66369 retry.go:31] will retry after 1.20224728s: waiting for machine to come up
	I0404 22:55:21.188813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:21.189284   66369 retry.go:31] will retry after 1.828752424s: waiting for machine to come up
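The default-k8s-diff-port-952083 VM is still booting here, so libmachine keeps re-reading the libvirt DHCP leases and sleeps with growing delays between attempts (272ms, 296ms, ... 1.8s). A rough Go sketch of that wait-with-increasing-backoff pattern; lookupIP is a hypothetical stand-in for the real lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for the real libvirt DHCP-lease query.
func lookupIP(domain string) (string, error) {
	return "", errNoLease // pretend the VM has not come up yet
}

// waitForIP polls until the domain reports an address, increasing the delay
// (plus a little jitter) between attempts, like the retry.go lines above.
func waitForIP(domain string, attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 4))
		time.Sleep(delay + jitter)
		delay += delay / 2 // grow roughly 1.5x per attempt
	}
	return "", fmt.Errorf("machine %s did not get an IP after %d attempts", domain, attempts)
}

func main() {
	ip, err := waitForIP("default-k8s-diff-port-952083", 8)
	fmt.Println(ip, err)
}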
	I0404 22:55:18.798849   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.231226   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.231258   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.231274   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.260339   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.260380   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.298635   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.330231   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.330264   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.798461   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.808690   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:21.808726   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.299332   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.305181   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.305213   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.798717   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.818393   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.818438   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.298741   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.305958   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.305991   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.798813   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.804261   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.804296   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.298487   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.304891   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:24.304928   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.798523   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.804212   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:55:24.811974   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:55:24.811999   65047 api_server.go:131] duration metric: took 6.513685326s to wait for apiserver health ...
	I0404 22:55:24.812008   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:24.812014   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:24.814353   65047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
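Process 65047 above polled https://192.168.50.77:8443/healthz until the 403s (anonymous access denied) and 500s (bootstrap post-start hooks not yet finished) gave way to a 200 ok, about 6.5s in total. A bare-bones Go sketch of such a wait loop; InsecureSkipVerify only keeps the sketch self-contained, whereas the real client authenticates with the cluster's certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, tolerating 403/500 responses while the control
// plane finishes its bootstrap hooks.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping TLS verification is only for this self-contained sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.77:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}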
	I0404 22:55:23.841622   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:25.841739   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
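Meanwhile process 64902 keeps reporting that the metrics-server pod's Ready condition is still False. Checking that condition yourself with client-go looks roughly like this (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// check the pod_ready.go lines above keep failing for metrics-server.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-xwm4m", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("metrics-server is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}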
	I0404 22:55:22.459272   65393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562873515s)
	I0404 22:55:22.459298   65393 crio.go:469] duration metric: took 3.56298059s to extract the tarball
	I0404 22:55:22.459305   65393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:22.506283   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:22.545033   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:22.545060   65393 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:55:22.545126   65393 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.545137   65393 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.545173   65393 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.545196   65393 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.545238   65393 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.545302   65393 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.545208   65393 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0404 22:55:22.545446   65393 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.546976   65393 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0404 22:55:22.547023   65393 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.547031   65393 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.546980   65393 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.547034   65393 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.547073   65393 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.547003   65393 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.547012   65393 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.771721   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0404 22:55:22.774797   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.775291   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.776362   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.785066   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.785465   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.787095   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.935936   65393 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0404 22:55:22.935986   65393 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0404 22:55:22.936032   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.978411   65393 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0404 22:55:22.978460   65393 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.978521   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.988007   65393 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0404 22:55:22.988053   65393 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.988095   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001263   65393 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0404 22:55:23.001296   65393 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0404 22:55:23.001325   65393 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.001356   65393 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.001373   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001400   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007053   65393 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0404 22:55:23.007107   65393 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.007111   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0404 22:55:23.007115   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:23.007135   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007071   65393 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0404 22:55:23.007139   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:23.007170   65393 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.007207   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.009469   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.010460   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.144982   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.145036   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0404 22:55:23.149847   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0404 22:55:23.149886   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0404 22:55:23.149931   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0404 22:55:23.386832   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.542789   65393 cache_images.go:92] duration metric: took 997.708569ms to LoadCachedImages
	W0404 22:55:23.542901   65393 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0404 22:55:23.542928   65393 kubeadm.go:928] updating node { 192.168.39.247 8443 v1.20.0 crio true true} ...
	I0404 22:55:23.543082   65393 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-343162 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:23.543199   65393 ssh_runner.go:195] Run: crio config
	I0404 22:55:23.604999   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:55:23.605023   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:23.605035   65393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:23.605057   65393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-343162 NodeName:old-k8s-version-343162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0404 22:55:23.605247   65393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-343162"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:23.605324   65393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0404 22:55:23.618943   65393 binaries.go:44] Found k8s binaries, skipping transfer
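The kubeadm.yaml rendered a few lines up bundles four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration (the stray %!"(MISSING) fragments in the evictionHard block appear to be printf-verb escaping artifacts of the log formatter rather than literal file contents). A quick, generic way to sanity-check a multi-document file like that before copying it to the guest, assuming a local copy named kubeadm.yaml and the gopkg.in/yaml.v3 module:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	raw, err := os.ReadFile("kubeadm.yaml") // local copy of the generated config
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Split on document separators and make sure every part parses.
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
			os.Exit(1)
		}
		fmt.Printf("document %d: kind=%v apiVersion=%v\n", i, m["kind"], m["apiVersion"])
	}
}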
	I0404 22:55:23.619021   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:23.631449   65393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0404 22:55:23.653570   65393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:23.674444   65393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0404 22:55:23.696627   65393 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:23.701248   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
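The bash one-liner above does an in-place upsert on the guest's /etc/hosts: grep -v drops any stale control-plane.minikube.internal line, echo appends the current mapping, and the temp file is copied back with sudo. The same idea in Go, run against a hypothetical local file rather than the guest's /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line for host and appends "ip\thost",
// mirroring the grep -v / echo / sudo cp trick in the log above.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // skip blank lines and the stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// hosts.local is a placeholder file name for this sketch.
	if err := upsertHostsEntry("hosts.local", "192.168.39.247", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}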
	I0404 22:55:23.715979   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:23.872589   65393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:23.893271   65393 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162 for IP: 192.168.39.247
	I0404 22:55:23.893302   65393 certs.go:194] generating shared ca certs ...
	I0404 22:55:23.893323   65393 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:23.893508   65393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:23.893573   65393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:23.893591   65393 certs.go:256] generating profile certs ...
	I0404 22:55:23.893703   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.key
	I0404 22:55:23.893789   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key.184368d7
	I0404 22:55:23.893848   65393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key
	I0404 22:55:23.894013   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:23.894060   65393 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:23.894075   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:23.894119   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:23.894152   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:23.894184   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:23.894283   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:23.895146   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:23.930844   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:23.970129   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:24.012084   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:24.061026   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0404 22:55:24.099215   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 22:55:24.144924   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:24.179027   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:24.214467   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:24.261693   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:24.294049   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:24.330228   65393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:24.357217   65393 ssh_runner.go:195] Run: openssl version
	I0404 22:55:24.364368   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:24.377630   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384429   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384493   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.392806   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:24.409360   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:24.425575   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431294   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431359   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.438465   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:24.452037   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:24.467913   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473901   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473964   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.482161   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:55:24.498553   65393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:24.505506   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:24.513143   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:24.520059   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:24.527197   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:24.534384   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:24.541499   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
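	Each "openssl x509 -checkend 86400" call above asks whether the certificate expires within the next 24 hours (signalled via the exit status), which is how the existing control-plane certs are judged reusable. A rough Go equivalent using crypto/x509, offered as a sketch only and not minikube's implementation; the path is one of the certs checked above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// at path expires within d (the check openssl's -checkend performs).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}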
	I0404 22:55:24.548056   65393 kubeadm.go:391] StartCluster: {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:24.548157   65393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:24.548208   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.592650   65393 cri.go:89] found id: ""
	I0404 22:55:24.592732   65393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:24.605071   65393 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:24.605101   65393 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:24.605107   65393 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:24.605167   65393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:24.616615   65393 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:24.617680   65393 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-343162" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:24.618355   65393 kubeconfig.go:62] /home/jenkins/minikube-integration/16143-5297/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-343162" cluster setting kubeconfig missing "old-k8s-version-343162" context setting]
	I0404 22:55:24.619375   65393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:24.621522   65393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:24.632596   65393 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.247
	I0404 22:55:24.632645   65393 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:24.632660   65393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:24.632717   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.683167   65393 cri.go:89] found id: ""
	I0404 22:55:24.683241   65393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:24.703840   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:24.717504   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:24.717524   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:24.717579   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:24.729845   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:24.729916   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:24.741544   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:24.755004   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:24.755081   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:24.769007   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.782305   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:24.782371   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.792627   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:24.802737   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:24.802806   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:24.814766   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:24.825422   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.006245   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.377950   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.371665115s)
	I0404 22:55:26.377988   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
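	For the restart path, minikube re-runs individual "kubeadm init" phases against the existing state (certs and kubeconfig above, kubelet-start here, with control-plane and etcd following further down) rather than a full "kubeadm init". A condensed sketch of that sequence using os/exec, assuming the binary and config paths shown in the log; the real code drives these commands through ssh_runner on the guest:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Phase order mirrored from the log lines above and below.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmdline := fmt.Sprintf(
				"sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml",
				phase)
			cmd := exec.Command("/bin/bash", "-c", cmdline)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "kubeadm phase %q failed: %v\n", phase, err)
				os.Exit(1)
			}
		}
	}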
	I0404 22:55:24.816240   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:24.831576   65047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:55:24.861844   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:24.873557   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:24.873598   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:24.873616   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:24.873627   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:24.873642   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:24.873650   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:24.873661   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:24.873669   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:24.873677   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:24.873690   65047 system_pods.go:74] duration metric: took 11.825989ms to wait for pod list to return data ...
	I0404 22:55:24.873703   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:24.879876   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:24.879907   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:24.879922   65047 node_conditions.go:105] duration metric: took 6.209498ms to run NodePressure ...
	I0404 22:55:24.879960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.214317   65047 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:25.220926   65047 kubeadm.go:733] kubelet initialised
	I0404 22:55:25.220998   65047 kubeadm.go:734] duration metric: took 6.651112ms waiting for restarted kubelet to initialise ...
	I0404 22:55:25.221013   65047 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:25.228414   65047 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.245164   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245223   65047 pod_ready.go:81] duration metric: took 16.78145ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.245238   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245271   65047 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.258160   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258192   65047 pod_ready.go:81] duration metric: took 12.905596ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.258205   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258215   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.265054   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265079   65047 pod_ready.go:81] duration metric: took 6.856081ms for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.265090   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265098   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.276639   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276664   65047 pod_ready.go:81] duration metric: took 11.554875ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.276675   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276684   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.665602   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665631   65047 pod_ready.go:81] duration metric: took 388.935811ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.665640   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665646   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.066199   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066229   65047 pod_ready.go:81] duration metric: took 400.576415ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.066242   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066252   65047 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.466681   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466715   65047 pod_ready.go:81] duration metric: took 400.447851ms for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.466728   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466738   65047 pod_ready.go:38] duration metric: took 1.245712492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
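	Each pod_ready wait above is a poll on the pod's Ready condition, bailing out early ("skipping!") while the hosting node itself is not Ready. A bare-bones client-go sketch of the readiness check, assuming the kubeconfig path from this run and one of the pod names above; illustrative only, not minikube's pod_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16143-5297/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll until the pod reports Ready=True or the deadline passes.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-024416", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}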
	I0404 22:55:26.466760   65047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:55:26.482692   65047 ops.go:34] apiserver oom_adj: -16
	I0404 22:55:26.482712   65047 kubeadm.go:591] duration metric: took 11.918553713s to restartPrimaryControlPlane
	I0404 22:55:26.482721   65047 kubeadm.go:393] duration metric: took 11.977000438s to StartCluster
	I0404 22:55:26.482769   65047 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.482864   65047 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:26.484995   65047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.485328   65047 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:55:26.487534   65047 out.go:177] * Verifying Kubernetes components...
	I0404 22:55:26.485383   65047 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:55:26.485563   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:55:26.489030   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:26.489050   65047 addons.go:69] Setting default-storageclass=true in profile "no-preload-024416"
	I0404 22:55:26.489093   65047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-024416"
	I0404 22:55:26.489056   65047 addons.go:69] Setting metrics-server=true in profile "no-preload-024416"
	I0404 22:55:26.489149   65047 addons.go:234] Setting addon metrics-server=true in "no-preload-024416"
	W0404 22:55:26.489172   65047 addons.go:243] addon metrics-server should already be in state true
	I0404 22:55:26.489211   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489054   65047 addons.go:69] Setting storage-provisioner=true in profile "no-preload-024416"
	I0404 22:55:26.489277   65047 addons.go:234] Setting addon storage-provisioner=true in "no-preload-024416"
	W0404 22:55:26.489290   65047 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:55:26.489319   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489539   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489573   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489591   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489641   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489667   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489685   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.508693   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0404 22:55:26.509305   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.510142   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.510166   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.510664   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.510832   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.511052   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41779
	I0404 22:55:26.511503   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.512213   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.512232   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.512619   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.513233   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.513270   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.515412   65047 addons.go:234] Setting addon default-storageclass=true in "no-preload-024416"
	W0404 22:55:26.515459   65047 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:55:26.515498   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.515891   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.515954   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.534673   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I0404 22:55:26.535571   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.536148   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.536173   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.536663   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.536896   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.537603   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37341
	I0404 22:55:26.538883   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.541150   65047 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:55:26.539561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0404 22:55:26.539868   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.542772   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:55:26.542787   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:55:26.542805   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.544250   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.544270   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.544593   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.544676   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.545317   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.545356   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.546171   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.546188   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.546711   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.546722   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547227   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.547285   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.547464   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.547499   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.547905   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.548076   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.548259   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.564561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0404 22:55:26.565127   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.565730   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.565757   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.566293   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.566516   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.567998   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0404 22:55:26.568463   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.568520   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.569025   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.569047   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.569379   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.569543   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.571698   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.572551   65047 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.572566   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:55:26.572582   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.574538   65047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.019306   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019754   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019781   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:23.019722   66369 retry.go:31] will retry after 1.404874513s: waiting for machine to come up
	I0404 22:55:24.425830   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426412   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426442   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:24.426364   66369 retry.go:31] will retry after 2.757787773s: waiting for machine to come up
	I0404 22:55:26.576073   65047 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.576092   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:55:26.576106   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.580084   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.580585   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.581225   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.581277   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.581495   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.581699   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.581854   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.582320   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582345   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.582386   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582604   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.584316   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.584463   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.584614   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.731392   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:26.753167   65047 node_ready.go:35] waiting up to 6m0s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:26.829134   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.955286   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:55:26.955377   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:55:26.968469   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.984915   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:55:26.984948   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:55:27.028529   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.028558   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:55:27.092343   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.186244   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186319   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186642   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.186662   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.186677   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.186690   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186709   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186969   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.187017   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.187031   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.193602   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.193623   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.193903   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.193913   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.193920   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878278   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878305   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.878702   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.878746   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.878760   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878779   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878787   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.879104   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.879197   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.879228   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054442   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054471   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.054800   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.054858   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054874   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.054896   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054905   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.055233   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.055259   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.055270   65047 addons.go:470] Verifying addon metrics-server=true in "no-preload-024416"
	I0404 22:55:28.055236   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.057439   65047 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0404 22:55:28.058994   65047 addons.go:505] duration metric: took 1.573614168s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0404 22:55:27.842914   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:30.342668   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:26.696179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.809448   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.955521   65393 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:26.955614   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.456445   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.956055   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.455728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.956667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.455874   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.955832   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.455677   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.956327   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:31.456660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.188296   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188797   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188827   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:27.188759   66369 retry.go:31] will retry after 2.351381492s: waiting for machine to come up
	I0404 22:55:29.541200   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541601   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541628   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:29.541553   66369 retry.go:31] will retry after 4.132646705s: waiting for machine to come up
	I0404 22:55:28.757883   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:31.257480   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:32.257641   65047 node_ready.go:49] node "no-preload-024416" has status "Ready":"True"
	I0404 22:55:32.257665   65047 node_ready.go:38] duration metric: took 5.504446554s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:32.257673   65047 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:32.263652   65047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268791   65047 pod_ready.go:92] pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.268811   65047 pod_ready.go:81] duration metric: took 5.13427ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268820   65047 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273130   65047 pod_ready.go:92] pod "etcd-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.273147   65047 pod_ready.go:81] duration metric: took 4.32194ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273155   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.841480   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:35.342317   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:31.956195   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.456027   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.955974   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.456657   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.956113   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.456687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.955878   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.456630   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.956721   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:36.456247   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.678106   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.678633   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Found IP for machine: 192.168.72.148
	I0404 22:55:33.678669   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserving static IP address...
	I0404 22:55:33.678686   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has current primary IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.679110   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.679145   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserved static IP address: 192.168.72.148
	I0404 22:55:33.679165   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | skip adding static IP to network mk-default-k8s-diff-port-952083 - found existing host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"}
	I0404 22:55:33.679184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Getting to WaitForSSH function...
	I0404 22:55:33.679196   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for SSH to be available...
	I0404 22:55:33.681734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682113   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.682144   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682283   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH client type: external
	I0404 22:55:33.682325   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa (-rw-------)
	I0404 22:55:33.682356   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:33.682372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | About to run SSH command:
	I0404 22:55:33.682427   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | exit 0
	I0404 22:55:33.812360   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | SSH cmd err, output: <nil>: 
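The wait above works by repeatedly invoking an external ssh client with the options shown and the trivial command `exit 0` until the guest answers. A rough Go sketch of that retry pattern, not minikube's actual code; the IP and key path are placeholders copied from the log:

// waitforssh.go - minimal sketch of a "wait for SSH" loop: run `ssh ... exit 0`
// with flags like those in the log until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		if sshReady("192.168.72.148", "/path/to/id_rsa") { // placeholder key path
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}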
	I0404 22:55:33.812704   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetConfigRaw
	I0404 22:55:33.813408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:33.816515   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.816970   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.817052   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.817322   64791 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/config.json ...
	I0404 22:55:33.817590   64791 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:33.817615   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:33.817828   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.820061   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820388   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.820421   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820604   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.820762   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.820948   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.821063   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.821211   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.821433   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.821450   64791 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:33.928894   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:33.928927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929163   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:55:33.929186   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.931838   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932292   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.932323   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932506   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.932688   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932844   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.933158   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.933321   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.933335   64791 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-952083 && echo "default-k8s-diff-port-952083" | sudo tee /etc/hostname
	I0404 22:55:34.060158   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-952083
	
	I0404 22:55:34.060185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.063179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063552   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.063586   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063777   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.063975   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064172   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064314   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.064477   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.064628   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.064650   64791 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-952083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-952083/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-952083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:34.186212   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
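The shell snippet the provisioner just ran ensures /etc/hosts maps 127.0.1.1 to the new hostname, either rewriting an existing 127.0.1.1 line or appending one. A small Go sketch of that same replace-or-append logic, operating on a string instead of the real file (illustrative only):

// sethost.go - sketch of the hostname step: keep /etc/hosts pointing 127.0.1.1
// at the machine name, mirroring the grep/sed/tee script in the log.
package main

import (
	"fmt"
	"strings"
)

func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
	entry := "127.0.1.1 " + name
	for _, l := range lines {
		if strings.Contains(l, name) { // hostname already mapped, nothing to do
			return hosts
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") { // replace the stale mapping
			lines[i] = entry
			return strings.Join(lines, "\n") + "\n"
		}
	}
	return strings.Join(append(lines, entry), "\n") + "\n" // append a new mapping
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostsEntry(in, "default-k8s-diff-port-952083"))
}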
	I0404 22:55:34.186240   64791 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:34.186309   64791 buildroot.go:174] setting up certificates
	I0404 22:55:34.186332   64791 provision.go:84] configureAuth start
	I0404 22:55:34.186351   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:34.186637   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:34.189184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189504   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.189544   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189635   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.191813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192315   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.192341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192507   64791 provision.go:143] copyHostCerts
	I0404 22:55:34.192560   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:34.192569   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:34.192622   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:34.192717   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:34.192726   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:34.192749   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:34.192812   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:34.192820   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:34.192838   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:34.192901   64791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-952083 san=[127.0.0.1 192.168.72.148 default-k8s-diff-port-952083 localhost minikube]
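The server certificate generated here is signed by the minikube CA and carries the SANs listed in the log line (127.0.0.1, 192.168.72.148, the hostname, localhost, minikube). A condensed Go sketch of issuing such a cert with crypto/x509; unlike the real flow, the CA is created inline for the demo rather than loaded from ca.pem/ca-key.pem, and error handling is dropped for brevity:

// gencert.go - illustrative sketch: issue a CA-signed server certificate with
// the SANs shown in the log. Not minikube's provision code.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Demo-only CA; the real flow loads the existing minikubeCA key pair.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-952083"}},
		DNSNames:     []string{"default-k8s-diff-port-952083", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.148")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}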
	I0404 22:55:34.326920   64791 provision.go:177] copyRemoteCerts
	I0404 22:55:34.326975   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:34.326997   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.329376   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329725   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.329752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.330118   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.330260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.330401   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.416395   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:34.443648   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0404 22:55:34.470803   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:55:34.497420   64791 provision.go:87] duration metric: took 311.071464ms to configureAuth
	I0404 22:55:34.497454   64791 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:34.497663   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:55:34.497759   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.500799   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501149   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.501180   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501387   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.501595   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501915   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.502071   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.502271   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.502303   64791 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:34.775054   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:34.775085   64791 machine.go:97] duration metric: took 957.478657ms to provisionDockerMachine
	I0404 22:55:34.775099   64791 start.go:293] postStartSetup for "default-k8s-diff-port-952083" (driver="kvm2")
	I0404 22:55:34.775112   64791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:34.775131   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:34.775488   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:34.775534   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.778577   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779005   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.779036   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779204   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.779393   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.779524   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.779658   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.864675   64791 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:34.869694   64791 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:34.869730   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:34.869826   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:34.869920   64791 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:34.870061   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:34.880961   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:34.907037   64791 start.go:296] duration metric: took 131.924012ms for postStartSetup
	I0404 22:55:34.907080   64791 fix.go:56] duration metric: took 19.833301571s for fixHost
	I0404 22:55:34.907108   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.909893   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910291   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.910316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910473   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.910679   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.910880   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.911020   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.911190   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.911412   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.911428   64791 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:55:35.022167   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271334.996001157
	
	I0404 22:55:35.022190   64791 fix.go:216] guest clock: 1712271334.996001157
	I0404 22:55:35.022200   64791 fix.go:229] Guest: 2024-04-04 22:55:34.996001157 +0000 UTC Remote: 2024-04-04 22:55:34.907085076 +0000 UTC m=+358.286689706 (delta=88.916081ms)
	I0404 22:55:35.022224   64791 fix.go:200] guest clock delta is within tolerance: 88.916081ms
	I0404 22:55:35.022231   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 19.948498144s
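The clock check above parses the guest's `date` output as seconds.nanoseconds, compares it against the host clock, and accepts the machine when the delta is within tolerance. A small Go sketch of that comparison (the tolerance value is illustrative, not necessarily the one minikube uses):

// clockdelta.go - sketch: parse a guest "seconds.nanoseconds" timestamp and
// check the absolute delta against the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1712271334.996001157") // value from the log
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}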
	I0404 22:55:35.022255   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.022485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:35.025707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026147   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.026179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026378   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.026876   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027047   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027120   64791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:35.027159   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.027276   64791 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:35.027318   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.030408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030592   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030785   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030819   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030910   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030929   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.030943   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.031146   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.031317   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031486   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.031653   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.105523   64791 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:35.145225   64791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:35.294577   64791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:35.301227   64791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:35.301318   64791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:35.318712   64791 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:35.318739   64791 start.go:494] detecting cgroup driver to use...
	I0404 22:55:35.318797   64791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:35.335684   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:35.351068   64791 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:35.351139   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:35.366084   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:35.381259   64791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:35.525652   64791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:35.702060   64791 docker.go:233] disabling docker service ...
	I0404 22:55:35.702137   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:35.720037   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:35.735605   64791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:35.868176   64791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:36.011977   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:36.030320   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:36.051953   64791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:55:36.052033   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.063475   64791 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:36.063539   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.075238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.086866   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.099524   64791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:36.112514   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.126407   64791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.146238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
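These sed invocations rewrite keys in the CRI-O drop-in config (pause image, cgroup manager, sysctls). A Go sketch of the same kind of whole-line rewrite using regexp, applied to an in-memory string rather than /etc/crio/crio.conf.d/02-crio.conf on the guest:

// criosed.go - sketch: rewrite pause_image and cgroup_manager lines in a
// crio.conf-style config, mirroring the sed edits in the log.
package main

import (
	"fmt"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

func configure(conf, pauseImage, cgroupDriver string) string {
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupDriver))
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.6\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(configure(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}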
	I0404 22:55:36.158896   64791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:36.170410   64791 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:36.170482   64791 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:36.185608   64791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
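When the bridge-netfilter sysctl is missing, the fallback shown above is to load br_netfilter and then enable IPv4 forwarding. A short Go sketch of that fallback; like the sudo commands in the log, it needs root:

// netfilter.go - sketch: load br_netfilter if the bridge sysctl is absent,
// then turn on IPv4 forwarding via /proc/sys.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const brSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(brSysctl); err != nil {
		fmt.Println("bridge sysctl missing, loading br_netfilter")
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("ip_forward enabled")
}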
	I0404 22:55:36.196706   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:36.327075   64791 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:55:36.470054   64791 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:36.470125   64791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:36.475664   64791 start.go:562] Will wait 60s for crictl version
	I0404 22:55:36.475727   64791 ssh_runner.go:195] Run: which crictl
	I0404 22:55:36.480165   64791 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:36.519941   64791 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
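Startup here waits up to 60s for the CRI socket to appear and then asks crictl for the runtime version. A Go sketch of that poll-then-query sequence; the socket path and crictl binary name are taken from the log:

// waitsock.go - sketch: poll for the CRI-O socket, then run `crictl version`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}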
	I0404 22:55:36.520021   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.549053   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.579491   64791 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:55:36.581026   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:36.583730   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584150   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:36.584184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584371   64791 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:36.588964   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:36.602269   64791 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0404 22:55:36.602374   64791 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:55:36.602416   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:36.641719   64791 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:55:36.641787   64791 ssh_runner.go:195] Run: which lz4
	I0404 22:55:36.646084   64791 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:55:36.650605   64791 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:36.650644   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:55:34.279364   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.288411   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:37.280858   65047 pod_ready.go:92] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.280895   65047 pod_ready.go:81] duration metric: took 5.007733221s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.280913   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286888   65047 pod_ready.go:92] pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.286912   65047 pod_ready.go:81] duration metric: took 5.987359ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286922   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292489   65047 pod_ready.go:92] pod "kube-proxy-zmx89" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.292518   65047 pod_ready.go:81] duration metric: took 5.588199ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292530   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298455   65047 pod_ready.go:92] pod "kube-scheduler-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.298478   65047 pod_ready.go:81] duration metric: took 5.939579ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298493   65047 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
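Each pod_ready wait above is a poll loop with a 6m0s budget: check the pod's Ready condition, sleep, repeat. A generic Go sketch of that loop; isReady is a stand-in callback rather than a real API-server query:

// podwait.go - sketch of the pod_ready polling pattern: retry a readiness
// check every couple of seconds until it reports true or the budget runs out.
package main

import (
	"fmt"
	"time"
)

func waitForReady(name string, timeout time.Duration, isReady func(string) bool) error {
	start := time.Now()
	for time.Since(start) < timeout {
		if isReady(name) {
			fmt.Printf("pod %q Ready after %v\n", name, time.Since(start))
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q not Ready within %v", name, timeout)
}

func main() {
	// Stand-in check that flips to true after ~6s, just to exercise the loop.
	deadline := time.Now().Add(6 * time.Second)
	_ = waitForReady("kube-apiserver-no-preload-024416", 6*time.Minute, func(string) bool {
		return time.Now().After(deadline)
	})
}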
	I0404 22:55:37.342788   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:39.841832   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.956068   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.456343   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.955665   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.456277   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.956681   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.455983   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.956000   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.456576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.956669   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.455855   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.281827   64791 crio.go:462] duration metric: took 1.635781318s to copy over tarball
	I0404 22:55:38.281962   64791 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:40.664660   64791 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.382669035s)
	I0404 22:55:40.664694   64791 crio.go:469] duration metric: took 2.382835483s to extract the tarball
	I0404 22:55:40.664704   64791 ssh_runner.go:146] rm: /preloaded.tar.lz4
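The preload path shown above is: if /preloaded.tar.lz4 is absent, copy it over, unpack it into /var with tar piped through lz4, then delete the tarball. A Go sketch of the unpack-and-clean-up part (it assumes tar and lz4 exist on the target, as they do in the minikube ISO, and it is not the real ssh_runner implementation):

// preload.go - sketch: extract the preload tarball into /var and remove it.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball not present; it would be scp'd over first")
		return
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	if err := exec.Command("sudo", "rm", "-f", tarball).Run(); err != nil {
		panic(err)
	}
}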
	I0404 22:55:40.706370   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:40.753953   64791 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:55:40.753983   64791 cache_images.go:84] Images are preloaded, skipping loading
	I0404 22:55:40.753992   64791 kubeadm.go:928] updating node { 192.168.72.148 8444 v1.29.3 crio true true} ...
	I0404 22:55:40.754096   64791 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-952083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:40.754157   64791 ssh_runner.go:195] Run: crio config
	I0404 22:55:40.809287   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:40.809309   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:40.809318   64791 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:40.809338   64791 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.148 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-952083 NodeName:default-k8s-diff-port-952083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:40.809467   64791 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.148
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-952083"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:40.809531   64791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:55:40.821089   64791 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:40.821151   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:40.832576   64791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0404 22:55:40.853706   64791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:40.874277   64791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0404 22:55:40.895096   64791 ssh_runner.go:195] Run: grep 192.168.72.148	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:40.899433   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:40.913456   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:41.041078   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:41.062340   64791 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083 for IP: 192.168.72.148
	I0404 22:55:41.062382   64791 certs.go:194] generating shared ca certs ...
	I0404 22:55:41.062402   64791 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:41.062583   64791 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:41.062640   64791 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:41.062662   64791 certs.go:256] generating profile certs ...
	I0404 22:55:41.062776   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/client.key
	I0404 22:55:41.062859   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key.c46373d6
	I0404 22:55:41.062921   64791 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key
	I0404 22:55:41.063037   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:41.063065   64791 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:41.063075   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:41.063099   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:41.063140   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:41.063166   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:41.063200   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:41.063842   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:41.113790   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:41.142967   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:41.174154   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:41.209434   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0404 22:55:41.244064   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:41.272716   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:41.297871   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:41.325547   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:41.352050   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:41.377876   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:41.404387   64791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:41.423187   64791 ssh_runner.go:195] Run: openssl version
	I0404 22:55:41.429873   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:41.442164   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447557   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447623   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.454131   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:41.467223   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:41.480671   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485831   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485898   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.492136   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:41.505696   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:41.519176   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524158   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524221   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.531176   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:55:41.543412   64791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:41.548643   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:41.555139   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:41.562284   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:41.569434   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:41.576462   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:41.583084   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:55:41.589945   64791 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:41.590031   64791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:41.590093   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:41.632266   64791 cri.go:89] found id: ""
	I0404 22:55:41.632350   64791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:41.644142   64791 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:41.644164   64791 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:41.644170   64791 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:41.644215   64791 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:41.656179   64791 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:41.657324   64791 kubeconfig.go:125] found "default-k8s-diff-port-952083" server: "https://192.168.72.148:8444"
	I0404 22:55:41.659605   64791 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:39.306769   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.307336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:42.038454   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:44.341305   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.341781   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.956291   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.455751   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.955701   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.456455   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.956511   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.456335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.956604   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.456239   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.955763   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:46.456691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.672105   64791 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.148
	I0404 22:55:42.028665   64791 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:42.028686   64791 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:42.028762   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:42.091777   64791 cri.go:89] found id: ""
	I0404 22:55:42.091854   64791 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:42.113539   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:42.124875   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:42.124901   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:42.124954   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 22:55:42.135500   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:42.135570   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:42.146253   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 22:55:42.156700   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:42.156761   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:42.168384   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.179440   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:42.179516   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.191004   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 22:55:42.201446   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:42.201506   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:42.212001   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:42.223300   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:42.338171   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.549365   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.211152725s)
	I0404 22:55:43.549401   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.801115   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.882593   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.959297   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:43.959380   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.459491   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.960236   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.988764   64791 api_server.go:72] duration metric: took 1.029467706s to wait for apiserver process to appear ...
	I0404 22:55:44.988792   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:44.988813   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:43.615360   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:45.804976   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:47.806675   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:48.357574   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.357611   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.357628   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.395772   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.395808   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.488922   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.505422   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.505481   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:48.988969   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.994000   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.994032   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.489335   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.500302   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:49.500347   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.989893   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.994450   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 22:55:50.001679   64791 api_server.go:141] control plane version: v1.29.3
	I0404 22:55:50.001715   64791 api_server.go:131] duration metric: took 5.012915028s to wait for apiserver health ...
	I0404 22:55:50.001726   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:50.001737   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:50.004063   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:55:48.840760   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:50.842564   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.956402   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.456719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.956495   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.456256   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.955795   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.455864   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.956545   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.456336   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.956454   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:51.455845   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.005634   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:50.017205   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:55:50.041244   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:50.050985   64791 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:50.051033   64791 system_pods.go:61] "coredns-76f75df574-psx97" [9e220912-4d45-45d7-85cb-65396d969b27] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:50.051044   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [980bda86-5d40-4762-ae29-9a1d01bcd17f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:50.051055   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [0d3ec134-dc70-445a-a2f8-1c00f6711b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:50.051064   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [d82ba649-a2ae-479e-8aa1-e734bc69f702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:50.051074   64791 system_pods.go:61] "kube-proxy-ssg9w" [23098716-a4cd-4164-a604-fc27b807050f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:50.051081   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [58c1457c-816c-4f10-ac46-be14b4e20592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:50.051089   64791 system_pods.go:61] "metrics-server-57f55c9bc5-zbl54" [f754b7dd-4233-4faa-b8d0-0d42d4c432a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:50.051105   64791 system_pods.go:61] "storage-provisioner" [3ea130dd-2282-4094-b3ca-990faf89b79b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:50.051116   64791 system_pods.go:74] duration metric: took 9.852724ms to wait for pod list to return data ...
	I0404 22:55:50.051136   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:50.056106   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:50.056149   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:50.056161   64791 node_conditions.go:105] duration metric: took 5.019174ms to run NodePressure ...
	I0404 22:55:50.056180   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:50.366600   64791 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371116   64791 kubeadm.go:733] kubelet initialised
	I0404 22:55:50.371136   64791 kubeadm.go:734] duration metric: took 4.514303ms waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371152   64791 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:50.376537   64791 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.382379   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382406   64791 pod_ready.go:81] duration metric: took 5.841107ms for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.382415   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382421   64791 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.387835   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387862   64791 pod_ready.go:81] duration metric: took 5.433039ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.387874   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387883   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.395448   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395477   64791 pod_ready.go:81] duration metric: took 7.587276ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.395488   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395494   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.444766   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444796   64791 pod_ready.go:81] duration metric: took 49.295101ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.444807   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444813   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845479   64791 pod_ready.go:92] pod "kube-proxy-ssg9w" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:50.845502   64791 pod_ready.go:81] duration metric: took 400.682428ms for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845511   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.305975   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.307023   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.842745   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:55.341222   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:51.955847   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.456474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.956717   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.456485   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.956668   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.455709   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.956540   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.455959   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.955819   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:56.456025   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.852107   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.856385   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.806097   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.806696   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.841694   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:00.341504   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.955746   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.456752   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.956458   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.455791   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.956520   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.456037   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.956424   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.456347   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.955807   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.456367   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.365969   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.853738   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:57.853764   64791 pod_ready.go:81] duration metric: took 7.008247096s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:57.853774   64791 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:59.862975   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:59.305144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.805592   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:02.341599   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.842260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.956676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.455725   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.956240   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.456026   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.955796   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.455684   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.955830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.455658   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.956488   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:06.456780   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.863099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.361040   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.362845   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:03.805883   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.305272   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:08.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:07.341339   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:09.341800   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.342526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.956270   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.456242   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.956623   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.455980   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.955699   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.456558   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.955713   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.455854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.955758   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:11.456543   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.363849   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.861759   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.805375   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:12.808721   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:13.841173   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.842526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.956223   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.456571   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.956246   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.456188   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.956319   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.456519   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.955719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.455682   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.956653   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:16.456447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.361956   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.362846   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.306163   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.806194   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.845799   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.342615   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:16.955750   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.456215   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.956505   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.456183   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.955744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.456227   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.955977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.455808   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.956276   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.456049   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.861226   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:19.861343   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.307101   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.805150   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.839779   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.841442   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:21.956160   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.456145   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.956335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.456579   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.956691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.456562   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.956005   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.456432   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.956467   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:26.456167   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.862937   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.360818   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.809913   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.305921   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.342163   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.347258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:26.956592   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:26.956691   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:27.005811   65393 cri.go:89] found id: ""
	I0404 22:56:27.005835   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.005842   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:27.005847   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:27.005917   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:27.042188   65393 cri.go:89] found id: ""
	I0404 22:56:27.042211   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.042235   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:27.042241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:27.042286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:27.079865   65393 cri.go:89] found id: ""
	I0404 22:56:27.079892   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.079900   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:27.079906   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:27.079958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:27.116998   65393 cri.go:89] found id: ""
	I0404 22:56:27.117024   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.117031   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:27.117037   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:27.117086   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:27.156188   65393 cri.go:89] found id: ""
	I0404 22:56:27.156214   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.156221   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:27.156236   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:27.156314   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:27.191588   65393 cri.go:89] found id: ""
	I0404 22:56:27.191620   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.191633   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:27.191640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:27.191687   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:27.231075   65393 cri.go:89] found id: ""
	I0404 22:56:27.231107   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.231117   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:27.231124   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:27.231190   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:27.270480   65393 cri.go:89] found id: ""
	I0404 22:56:27.270513   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.270525   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:27.270537   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:27.270561   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:27.324332   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:27.324373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:27.339847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:27.339874   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:27.468846   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:27.468868   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:27.468885   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:27.533979   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:27.534016   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.080447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:30.094371   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:30.094442   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:30.132187   65393 cri.go:89] found id: ""
	I0404 22:56:30.132211   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.132219   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:30.132225   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:30.132271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:30.173402   65393 cri.go:89] found id: ""
	I0404 22:56:30.173427   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.173437   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:30.173445   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:30.173509   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:30.212702   65393 cri.go:89] found id: ""
	I0404 22:56:30.212759   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.212773   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:30.212784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:30.212857   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:30.267124   65393 cri.go:89] found id: ""
	I0404 22:56:30.267153   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.267164   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:30.267171   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:30.267240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:30.314850   65393 cri.go:89] found id: ""
	I0404 22:56:30.314877   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.314887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:30.314895   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:30.314951   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:30.353961   65393 cri.go:89] found id: ""
	I0404 22:56:30.353985   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.353996   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:30.354003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:30.354065   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:30.393287   65393 cri.go:89] found id: ""
	I0404 22:56:30.393321   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.393333   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:30.393340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:30.393402   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:30.432268   65393 cri.go:89] found id: ""
	I0404 22:56:30.432304   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.432315   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:30.432333   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:30.432349   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:30.498906   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:30.498941   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.544676   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:30.544711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:30.595528   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:30.595562   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:30.610773   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:30.610811   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:30.688433   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:26.862939   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.360914   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.360952   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.806657   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:32.304653   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.841404   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.341994   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:33.188634   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:33.203199   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:33.203262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:33.243222   65393 cri.go:89] found id: ""
	I0404 22:56:33.243250   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.243257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:33.243262   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:33.243330   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:33.284525   65393 cri.go:89] found id: ""
	I0404 22:56:33.284550   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.284560   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:33.284567   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:33.284621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:33.324219   65393 cri.go:89] found id: ""
	I0404 22:56:33.324249   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.324266   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:33.324273   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:33.324328   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:33.362229   65393 cri.go:89] found id: ""
	I0404 22:56:33.362254   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.362265   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:33.362272   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:33.362333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.404616   65393 cri.go:89] found id: ""
	I0404 22:56:33.404651   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.404663   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:33.404671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:33.404741   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:33.445123   65393 cri.go:89] found id: ""
	I0404 22:56:33.445150   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.445160   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:33.445168   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:33.445227   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:33.485996   65393 cri.go:89] found id: ""
	I0404 22:56:33.486025   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.486033   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:33.486041   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:33.486108   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:33.524275   65393 cri.go:89] found id: ""
	I0404 22:56:33.524299   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.524307   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:33.524315   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:33.524326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:33.577095   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:33.577133   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:33.592799   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:33.592830   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:33.672784   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:33.672804   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:33.672815   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:33.748049   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:33.748091   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:36.293335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:36.308303   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:36.308395   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:36.346627   65393 cri.go:89] found id: ""
	I0404 22:56:36.346648   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.346656   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:36.346661   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:36.346704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:36.386923   65393 cri.go:89] found id: ""
	I0404 22:56:36.386950   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.386958   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:36.386963   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:36.387011   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:36.427795   65393 cri.go:89] found id: ""
	I0404 22:56:36.427824   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.427832   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:36.427838   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:36.427898   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:36.464754   65393 cri.go:89] found id: ""
	I0404 22:56:36.464780   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.464790   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:36.464797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:36.464864   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.361764   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:35.861327   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.807482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:37.305470   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.842142   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:38.843672   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.341676   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.499502   65393 cri.go:89] found id: ""
	I0404 22:56:36.499529   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.499540   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:36.499549   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:36.499610   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:36.536686   65393 cri.go:89] found id: ""
	I0404 22:56:36.536716   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.536726   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:36.536734   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:36.536782   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:36.572573   65393 cri.go:89] found id: ""
	I0404 22:56:36.572602   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.572614   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:36.572623   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:36.572683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:36.607444   65393 cri.go:89] found id: ""
	I0404 22:56:36.607474   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.607483   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:36.607491   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:36.607502   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:36.662990   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:36.663040   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:36.677996   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:36.678019   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:36.751850   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:36.751871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:36.751886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:36.831451   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:36.831484   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:39.393170   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:39.407581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:39.407640   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:39.441850   65393 cri.go:89] found id: ""
	I0404 22:56:39.441879   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.441889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:39.441896   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:39.441981   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:39.476890   65393 cri.go:89] found id: ""
	I0404 22:56:39.476919   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.476931   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:39.476938   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:39.477001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:39.511497   65393 cri.go:89] found id: ""
	I0404 22:56:39.511523   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.511534   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:39.511540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:39.511605   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:39.549485   65393 cri.go:89] found id: ""
	I0404 22:56:39.549514   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.549526   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:39.549534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:39.549594   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:39.589203   65393 cri.go:89] found id: ""
	I0404 22:56:39.589234   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.589243   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:39.589249   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:39.589311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:39.626903   65393 cri.go:89] found id: ""
	I0404 22:56:39.626926   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.626939   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:39.626946   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:39.627008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:39.660975   65393 cri.go:89] found id: ""
	I0404 22:56:39.660999   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.661007   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:39.661016   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:39.661067   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:39.697162   65393 cri.go:89] found id: ""
	I0404 22:56:39.697187   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.697195   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:39.697203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:39.697217   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:39.755642   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:39.755683   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:39.773016   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:39.773052   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:39.849466   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:39.849491   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:39.849507   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:39.940680   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:39.940716   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:38.360638   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:40.361088   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:39.305528   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.307136   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.844830   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:46.342481   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:42.486573   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:42.500548   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:42.500612   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:42.539890   65393 cri.go:89] found id: ""
	I0404 22:56:42.539926   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.539954   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:42.539964   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:42.540030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:42.576664   65393 cri.go:89] found id: ""
	I0404 22:56:42.576699   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.576709   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:42.576715   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:42.576816   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:42.615456   65393 cri.go:89] found id: ""
	I0404 22:56:42.615489   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.615500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:42.615507   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:42.615569   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:42.667886   65393 cri.go:89] found id: ""
	I0404 22:56:42.667917   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.667928   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:42.667935   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:42.668001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:42.726120   65393 cri.go:89] found id: ""
	I0404 22:56:42.726150   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.726162   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:42.726169   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:42.726233   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:42.781280   65393 cri.go:89] found id: ""
	I0404 22:56:42.781305   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.781316   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:42.781322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:42.781386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:42.818419   65393 cri.go:89] found id: ""
	I0404 22:56:42.818449   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.818459   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:42.818466   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:42.818531   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:42.867866   65393 cri.go:89] found id: ""
	I0404 22:56:42.867902   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.867911   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:42.867920   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:42.867935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:42.953141   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:42.953186   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:42.994936   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:42.994968   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:43.047257   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:43.047288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:43.062391   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:43.062426   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:43.138948   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:45.639469   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:45.654584   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:45.654662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:45.693244   65393 cri.go:89] found id: ""
	I0404 22:56:45.693267   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.693276   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:45.693281   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:45.693335   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:45.732468   65393 cri.go:89] found id: ""
	I0404 22:56:45.732501   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.732513   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:45.732520   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:45.732587   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:45.769504   65393 cri.go:89] found id: ""
	I0404 22:56:45.769538   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.769552   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:45.769560   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:45.769625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:45.815840   65393 cri.go:89] found id: ""
	I0404 22:56:45.815864   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.815872   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:45.815877   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:45.815987   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:45.856479   65393 cri.go:89] found id: ""
	I0404 22:56:45.856511   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.856522   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:45.856530   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:45.856596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:45.896703   65393 cri.go:89] found id: ""
	I0404 22:56:45.896726   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.896734   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:45.896739   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:45.896797   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:45.938227   65393 cri.go:89] found id: ""
	I0404 22:56:45.938253   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.938261   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:45.938266   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:45.938349   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:45.975762   65393 cri.go:89] found id: ""
	I0404 22:56:45.975790   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.975801   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:45.975811   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:45.975823   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:46.056563   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:46.056599   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.099210   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:46.099242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:46.153524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:46.153560   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:46.169268   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:46.169297   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:46.244893   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:42.361586   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:44.860609   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.806644   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:45.807773   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:47.807842   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.841498   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.341274   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.745334   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:48.760547   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:48.760625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:48.799676   65393 cri.go:89] found id: ""
	I0404 22:56:48.799699   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.799706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:48.799721   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:48.799780   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:48.843434   65393 cri.go:89] found id: ""
	I0404 22:56:48.843467   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.843476   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:48.843481   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:48.843544   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:48.895391   65393 cri.go:89] found id: ""
	I0404 22:56:48.895421   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.895440   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:48.895448   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:48.895513   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:48.936222   65393 cri.go:89] found id: ""
	I0404 22:56:48.936252   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.936263   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:48.936271   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:48.936334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:48.974533   65393 cri.go:89] found id: ""
	I0404 22:56:48.974563   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.974570   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:48.974575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:48.974629   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:49.015378   65393 cri.go:89] found id: ""
	I0404 22:56:49.015406   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.015424   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:49.015440   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:49.015501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:49.056623   65393 cri.go:89] found id: ""
	I0404 22:56:49.056647   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.056658   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:49.056664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:49.056725   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:49.102414   65393 cri.go:89] found id: ""
	I0404 22:56:49.102442   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.102453   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:49.102464   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:49.102476   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:49.158193   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:49.158240   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:49.173121   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:49.173147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:49.248973   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:49.249000   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:49.249017   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:49.341732   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:49.341778   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.861623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.863321   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.362926   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:50.305198   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:52.305614   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:53.348777   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:55.848753   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.888403   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:51.903442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:51.903503   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:51.941672   65393 cri.go:89] found id: ""
	I0404 22:56:51.941698   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.941706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:51.941712   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:51.941766   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:51.981373   65393 cri.go:89] found id: ""
	I0404 22:56:51.981396   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.981408   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:51.981413   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:51.981460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:52.019539   65393 cri.go:89] found id: ""
	I0404 22:56:52.019567   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.019575   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:52.019581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:52.019645   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:52.059015   65393 cri.go:89] found id: ""
	I0404 22:56:52.059050   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.059060   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:52.059073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:52.059134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:52.099879   65393 cri.go:89] found id: ""
	I0404 22:56:52.099904   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.099911   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:52.099917   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:52.099975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:52.140665   65393 cri.go:89] found id: ""
	I0404 22:56:52.140739   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.140752   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:52.140761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:52.140833   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:52.182063   65393 cri.go:89] found id: ""
	I0404 22:56:52.182091   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.182100   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:52.182106   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:52.182161   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:52.218610   65393 cri.go:89] found id: ""
	I0404 22:56:52.218636   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.218644   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:52.218651   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:52.218666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:52.232987   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:52.233014   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:52.307317   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:52.307343   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:52.307358   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:52.390484   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:52.390522   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:52.437568   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:52.437601   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:54.995830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:55.010758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:55.010818   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:55.057312   65393 cri.go:89] found id: ""
	I0404 22:56:55.057342   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.057354   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:55.057362   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:55.057440   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:55.097774   65393 cri.go:89] found id: ""
	I0404 22:56:55.097803   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.097815   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:55.097822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:55.097881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:55.134169   65393 cri.go:89] found id: ""
	I0404 22:56:55.134198   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.134207   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:55.134215   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:55.134268   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:55.174151   65393 cri.go:89] found id: ""
	I0404 22:56:55.174191   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.174203   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:55.174210   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:55.174297   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:55.213618   65393 cri.go:89] found id: ""
	I0404 22:56:55.213646   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.213655   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:55.213661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:55.213722   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:55.252406   65393 cri.go:89] found id: ""
	I0404 22:56:55.252435   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.252446   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:55.252455   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:55.252536   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:55.289595   65393 cri.go:89] found id: ""
	I0404 22:56:55.289623   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.289633   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:55.289640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:55.289702   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:55.328579   65393 cri.go:89] found id: ""
	I0404 22:56:55.328611   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.328621   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:55.328632   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:55.328647   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:55.384434   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:55.384470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:55.399311   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:55.399336   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:55.478341   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:55.478370   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:55.478404   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:55.560209   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:55.560242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:53.861243   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.361466   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:54.306859   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.804903   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.341302   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.342865   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.104854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:58.119735   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:58.119813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:58.158145   65393 cri.go:89] found id: ""
	I0404 22:56:58.158169   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.158182   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:58.158190   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:58.158255   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:58.198676   65393 cri.go:89] found id: ""
	I0404 22:56:58.198703   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.198714   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:58.198721   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:58.198779   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:58.236393   65393 cri.go:89] found id: ""
	I0404 22:56:58.236419   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.236430   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:58.236436   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:58.236505   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:58.274688   65393 cri.go:89] found id: ""
	I0404 22:56:58.274714   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.274724   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:58.274731   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:58.274796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:58.311908   65393 cri.go:89] found id: ""
	I0404 22:56:58.311935   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.311947   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:58.311956   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:58.312016   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:58.353477   65393 cri.go:89] found id: ""
	I0404 22:56:58.353500   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.353513   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:58.353518   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:58.353577   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.390981   65393 cri.go:89] found id: ""
	I0404 22:56:58.391004   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.391012   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:58.391017   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:58.391084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:58.433125   65393 cri.go:89] found id: ""
	I0404 22:56:58.433147   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.433160   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:58.433168   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:58.433181   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:58.488214   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:58.488251   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:58.503274   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:58.503315   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:58.583124   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:58.583149   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:58.583167   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:58.662856   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:58.662889   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:01.204061   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:01.219133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:01.219213   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:01.256615   65393 cri.go:89] found id: ""
	I0404 22:57:01.256641   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.256650   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:01.256655   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:01.256704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:01.294765   65393 cri.go:89] found id: ""
	I0404 22:57:01.294796   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.294805   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:01.294813   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:01.294874   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:01.334936   65393 cri.go:89] found id: ""
	I0404 22:57:01.334966   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.334976   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:01.334983   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:01.335030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:01.382389   65393 cri.go:89] found id: ""
	I0404 22:57:01.382429   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.382438   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:01.382443   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:01.382491   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:01.421962   65393 cri.go:89] found id: ""
	I0404 22:57:01.422001   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.422013   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:01.422020   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:01.422084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:01.460394   65393 cri.go:89] found id: ""
	I0404 22:57:01.460424   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.460432   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:01.460437   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:01.460484   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.365468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.862158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.807541   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.305535   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:03.305610   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:02.841121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:04.841717   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.497525   65393 cri.go:89] found id: ""
	I0404 22:57:01.497552   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.497561   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:01.497566   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:01.497623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:01.535396   65393 cri.go:89] found id: ""
	I0404 22:57:01.535431   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.535442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:01.535454   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:01.535469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:01.588397   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:01.588437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:01.603396   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:01.603427   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:01.686312   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:01.686332   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:01.686344   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:01.771352   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:01.771393   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
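	The cycle ending just above is minikube's repeated diagnostic loop while it waits for this node's control plane (a v1.20.0 cluster, judging by the bundled kubectl path) to come up: it probes for a kube-apiserver process, lists CRI containers by component name, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying. A minimal sketch of the same checks, runnable over SSH on the node and using only the commands already shown in the log (assuming crictl and journalctl are available there, as they evidently are):
	# Sketch of the diagnostic loop logged above; commands are taken verbatim from the log.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done
	sudo journalctl -u kubelet -n 400                                         # kubelet logs
	sudo journalctl -u crio -n 400                                            # CRI-O logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig                               # fails while the API server is down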
	I0404 22:57:04.316206   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:04.330612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:04.330677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:04.377998   65393 cri.go:89] found id: ""
	I0404 22:57:04.378023   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.378031   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:04.378036   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:04.378098   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:04.421775   65393 cri.go:89] found id: ""
	I0404 22:57:04.421811   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.421822   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:04.421830   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:04.421894   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:04.463234   65393 cri.go:89] found id: ""
	I0404 22:57:04.463265   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.463276   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:04.463284   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:04.463347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:04.510879   65393 cri.go:89] found id: ""
	I0404 22:57:04.510907   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.510916   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:04.510925   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:04.511012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:04.552305   65393 cri.go:89] found id: ""
	I0404 22:57:04.552336   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.552348   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:04.552356   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:04.552413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:04.592631   65393 cri.go:89] found id: ""
	I0404 22:57:04.592661   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.592672   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:04.592679   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:04.592739   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:04.631854   65393 cri.go:89] found id: ""
	I0404 22:57:04.631883   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.631893   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:04.631900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:04.631966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:04.670530   65393 cri.go:89] found id: ""
	I0404 22:57:04.670554   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.670563   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:04.670570   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:04.670582   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:04.685546   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:04.685575   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:04.770377   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:04.770400   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:04.770420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:04.853637   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:04.853674   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.899690   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:04.899719   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:03.362881   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.363597   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.306377   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:07.805659   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:06.842313   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.341347   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
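	Interleaved with the 65393 loop, three other start attempts (PIDs 64791, 65047 and 64902) are polling metrics-server pods that never report Ready, which is what eventually times out the UserAppExistsAfterStop/AddonExistsAfterStop tests. A hedged manual equivalent of that readiness check is sketched below; the profile placeholder and the k8s-app=metrics-server selector are illustrative conventions, not values read from the harness:
	# Illustrative manual readiness check (label selector and <profile> are assumptions, not from the log).
	kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'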
	I0404 22:57:07.451203   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:07.465593   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:07.465665   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:07.505475   65393 cri.go:89] found id: ""
	I0404 22:57:07.505504   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.505516   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:07.505524   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:07.505584   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:07.543744   65393 cri.go:89] found id: ""
	I0404 22:57:07.543802   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.543814   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:07.543822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:07.543891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:07.586034   65393 cri.go:89] found id: ""
	I0404 22:57:07.586059   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.586067   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:07.586073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:07.586133   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:07.624880   65393 cri.go:89] found id: ""
	I0404 22:57:07.624908   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.624917   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:07.624932   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:07.624992   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:07.667653   65393 cri.go:89] found id: ""
	I0404 22:57:07.667684   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.667696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:07.667704   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:07.667798   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:07.704434   65393 cri.go:89] found id: ""
	I0404 22:57:07.704466   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.704478   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:07.704486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:07.704547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:07.741679   65393 cri.go:89] found id: ""
	I0404 22:57:07.741702   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.741710   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:07.741715   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:07.741760   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:07.782407   65393 cri.go:89] found id: ""
	I0404 22:57:07.782434   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.782442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:07.782450   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:07.782461   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:07.796141   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:07.796175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:07.880985   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:07.881004   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:07.881015   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:07.963986   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:07.964027   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:08.010874   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:08.010907   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.564802   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:10.581360   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:10.581430   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:10.619856   65393 cri.go:89] found id: ""
	I0404 22:57:10.619882   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.619889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:10.619895   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:10.619958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:10.662867   65393 cri.go:89] found id: ""
	I0404 22:57:10.662893   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.662900   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:10.662906   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:10.662966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:10.705241   65393 cri.go:89] found id: ""
	I0404 22:57:10.705277   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.705290   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:10.705298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:10.705363   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:10.743811   65393 cri.go:89] found id: ""
	I0404 22:57:10.743844   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.743855   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:10.743863   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:10.743918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:10.783648   65393 cri.go:89] found id: ""
	I0404 22:57:10.783672   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.783684   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:10.783690   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:10.783743   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:10.825606   65393 cri.go:89] found id: ""
	I0404 22:57:10.825639   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.825649   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:10.825657   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:10.825712   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:10.865138   65393 cri.go:89] found id: ""
	I0404 22:57:10.865168   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.865178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:10.865185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:10.865238   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:10.906855   65393 cri.go:89] found id: ""
	I0404 22:57:10.906881   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.906888   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:10.906895   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:10.906937   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.960341   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:10.960375   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:10.975199   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:10.975228   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:11.083944   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:11.083970   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:11.083985   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:11.166754   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:11.166794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:07.860607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.864277   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.806222   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:12.305904   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:11.841055   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.841573   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:15.843393   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.708326   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:13.724345   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:13.724413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:13.766699   65393 cri.go:89] found id: ""
	I0404 22:57:13.766732   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.766744   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:13.766751   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:13.766813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:13.804517   65393 cri.go:89] found id: ""
	I0404 22:57:13.804545   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.804556   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:13.804564   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:13.804623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:13.846168   65393 cri.go:89] found id: ""
	I0404 22:57:13.846202   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.846213   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:13.846219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:13.846278   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:13.894719   65393 cri.go:89] found id: ""
	I0404 22:57:13.894743   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.894753   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:13.894761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:13.894823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:13.929981   65393 cri.go:89] found id: ""
	I0404 22:57:13.930013   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.930024   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:13.930031   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:13.930102   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:13.968531   65393 cri.go:89] found id: ""
	I0404 22:57:13.968578   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.968590   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:13.968598   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:13.968662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:14.012460   65393 cri.go:89] found id: ""
	I0404 22:57:14.012492   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.012502   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:14.012509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:14.012571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:14.051175   65393 cri.go:89] found id: ""
	I0404 22:57:14.051207   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.051218   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:14.051228   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:14.051243   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:14.107968   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:14.108004   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:14.123756   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:14.123794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:14.209452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:14.209475   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:14.209493   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:14.287297   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:14.287334   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:11.864620   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.360720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.361734   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.307191   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.806031   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.341533   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.344059   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.832667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:16.850323   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:16.850396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:16.897389   65393 cri.go:89] found id: ""
	I0404 22:57:16.897415   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.897426   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:16.897433   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:16.897502   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:16.938893   65393 cri.go:89] found id: ""
	I0404 22:57:16.938927   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.938939   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:16.938945   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:16.939005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:16.978621   65393 cri.go:89] found id: ""
	I0404 22:57:16.978648   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.978655   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:16.978661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:16.978707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:17.036499   65393 cri.go:89] found id: ""
	I0404 22:57:17.036523   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.036532   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:17.036539   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:17.036596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:17.092932   65393 cri.go:89] found id: ""
	I0404 22:57:17.092958   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.092966   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:17.092972   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:17.093026   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:17.129925   65393 cri.go:89] found id: ""
	I0404 22:57:17.129948   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.129956   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:17.129963   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:17.130025   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:17.168192   65393 cri.go:89] found id: ""
	I0404 22:57:17.168218   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.168232   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:17.168238   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:17.168317   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:17.213090   65393 cri.go:89] found id: ""
	I0404 22:57:17.213115   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.213125   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:17.213136   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:17.213149   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:17.295489   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:17.295527   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:17.350780   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:17.350832   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:17.403580   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:17.403621   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:17.419093   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:17.419131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:17.503614   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
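	Every describe-nodes attempt in these cycles fails the same way: the connection to localhost:8443 is refused because no kube-apiserver container exists yet. Two quick manual probes one could run on the node to confirm that state are sketched below; they are not part of the test harness, only an illustration:
	# Illustrative probes for the refused localhost:8443 connection (not run by the harness).
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	curl -sk https://localhost:8443/healthz || echo "API server not answering"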
	I0404 22:57:20.004676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:20.019340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:20.019421   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:20.059400   65393 cri.go:89] found id: ""
	I0404 22:57:20.059431   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.059442   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:20.059448   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:20.059510   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:20.104769   65393 cri.go:89] found id: ""
	I0404 22:57:20.104796   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.104808   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:20.104815   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:20.104875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:20.143209   65393 cri.go:89] found id: ""
	I0404 22:57:20.143233   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.143241   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:20.143248   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:20.143300   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:20.187941   65393 cri.go:89] found id: ""
	I0404 22:57:20.187976   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.187987   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:20.187995   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:20.188082   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:20.231158   65393 cri.go:89] found id: ""
	I0404 22:57:20.231192   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.231200   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:20.231206   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:20.231271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:20.281069   65393 cri.go:89] found id: ""
	I0404 22:57:20.281102   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.281113   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:20.281121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:20.281187   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:20.320403   65393 cri.go:89] found id: ""
	I0404 22:57:20.320436   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.320448   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:20.320456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:20.320529   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:20.371202   65393 cri.go:89] found id: ""
	I0404 22:57:20.371229   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.371237   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:20.371246   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:20.371256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:20.416698   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:20.416725   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:20.468328   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:20.468362   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:20.483250   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:20.483274   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:20.562841   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.562871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:20.562886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:18.363136   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.860719   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.806337   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:21.306098   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:22.841457   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:24.843001   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.141477   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:23.157859   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:23.157924   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:23.203492   65393 cri.go:89] found id: ""
	I0404 22:57:23.203526   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.203538   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:23.203545   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:23.203613   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:23.248190   65393 cri.go:89] found id: ""
	I0404 22:57:23.248220   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.248231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:23.248244   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:23.248310   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:23.284427   65393 cri.go:89] found id: ""
	I0404 22:57:23.284456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.284467   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:23.284475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:23.284562   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:23.322429   65393 cri.go:89] found id: ""
	I0404 22:57:23.322456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.322464   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:23.322469   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:23.322534   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:23.364030   65393 cri.go:89] found id: ""
	I0404 22:57:23.364069   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.364080   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:23.364087   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:23.364168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:23.408306   65393 cri.go:89] found id: ""
	I0404 22:57:23.408343   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.408356   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:23.408363   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:23.408423   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:23.452933   65393 cri.go:89] found id: ""
	I0404 22:57:23.452968   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.452976   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:23.452982   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:23.453036   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:23.494156   65393 cri.go:89] found id: ""
	I0404 22:57:23.494184   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.494193   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:23.494203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:23.494222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:23.548013   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:23.548053   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:23.564765   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:23.564797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:23.642661   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:23.642685   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:23.642700   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:23.737958   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:23.737996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:26.290576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:26.307580   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:26.307641   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:26.345971   65393 cri.go:89] found id: ""
	I0404 22:57:26.346000   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.346011   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:26.346018   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:26.346077   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:26.386979   65393 cri.go:89] found id: ""
	I0404 22:57:26.387009   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.387019   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:26.387026   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:26.387112   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:26.430627   65393 cri.go:89] found id: ""
	I0404 22:57:26.430649   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.430665   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:26.430671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:26.430724   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:26.469657   65393 cri.go:89] found id: ""
	I0404 22:57:26.469705   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.469716   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:26.469723   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:26.469794   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:22.861245   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.360840   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.804732   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.805727   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.805819   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.343674   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.843199   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:26.513853   65393 cri.go:89] found id: ""
	I0404 22:57:26.513877   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.513887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:26.513894   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:26.513954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:26.557702   65393 cri.go:89] found id: ""
	I0404 22:57:26.557731   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.557742   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:26.557749   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:26.557807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:26.595248   65393 cri.go:89] found id: ""
	I0404 22:57:26.595279   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.595291   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:26.595298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:26.595364   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:26.634552   65393 cri.go:89] found id: ""
	I0404 22:57:26.634581   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.634591   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:26.634603   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:26.634618   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:26.686928   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:26.686963   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:26.701308   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:26.701341   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:26.785243   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:26.785267   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:26.785286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:26.867513   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:26.867555   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.416286   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:29.434234   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:29.434308   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:29.488598   65393 cri.go:89] found id: ""
	I0404 22:57:29.488628   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.488637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:29.488643   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:29.488727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:29.541107   65393 cri.go:89] found id: ""
	I0404 22:57:29.541130   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.541138   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:29.541144   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:29.541204   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:29.597144   65393 cri.go:89] found id: ""
	I0404 22:57:29.597174   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.597186   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:29.597192   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:29.597258   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:29.636833   65393 cri.go:89] found id: ""
	I0404 22:57:29.636865   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.636874   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:29.636881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:29.636961   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:29.672867   65393 cri.go:89] found id: ""
	I0404 22:57:29.672893   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.672903   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:29.672909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:29.672970   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:29.713135   65393 cri.go:89] found id: ""
	I0404 22:57:29.713168   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.713180   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:29.713188   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:29.713279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:29.750862   65393 cri.go:89] found id: ""
	I0404 22:57:29.750895   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.750906   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:29.750914   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:29.750965   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:29.790274   65393 cri.go:89] found id: ""
	I0404 22:57:29.790302   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.790311   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:29.790320   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:29.790332   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.831848   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:29.831884   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:29.889443   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:29.889478   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:29.905468   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:29.905500   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:29.983283   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:29.983313   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:29.983328   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:27.862273   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:30.361189   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.806568   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:31.807248   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.341638   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.841257   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.567351   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:32.581988   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:32.582052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:32.619164   65393 cri.go:89] found id: ""
	I0404 22:57:32.619191   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.619201   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:32.619208   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:32.619262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:32.656844   65393 cri.go:89] found id: ""
	I0404 22:57:32.656882   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.656894   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:32.656902   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:32.656966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:32.693024   65393 cri.go:89] found id: ""
	I0404 22:57:32.693052   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.693064   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:32.693071   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:32.693134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:32.732579   65393 cri.go:89] found id: ""
	I0404 22:57:32.732609   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.732618   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:32.732625   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:32.732685   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:32.777586   65393 cri.go:89] found id: ""
	I0404 22:57:32.777624   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.777634   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:32.777644   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:32.777713   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:32.817079   65393 cri.go:89] found id: ""
	I0404 22:57:32.817115   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.817127   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:32.817136   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:32.817199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:32.856945   65393 cri.go:89] found id: ""
	I0404 22:57:32.856978   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.856988   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:32.856994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:32.857047   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:32.895047   65393 cri.go:89] found id: ""
	I0404 22:57:32.895070   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.895082   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:32.895090   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:32.895103   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.949865   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:32.949904   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:32.964629   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:32.964656   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:33.039424   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:33.039446   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:33.039458   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:33.115819   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:33.115854   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:35.659512   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:35.674512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:35.674589   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:35.714058   65393 cri.go:89] found id: ""
	I0404 22:57:35.714093   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.714102   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:35.714109   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:35.714173   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:35.753765   65393 cri.go:89] found id: ""
	I0404 22:57:35.753800   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.753811   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:35.753819   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:35.753883   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:35.792177   65393 cri.go:89] found id: ""
	I0404 22:57:35.792204   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.792216   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:35.792223   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:35.792288   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:35.829891   65393 cri.go:89] found id: ""
	I0404 22:57:35.829923   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.829943   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:35.829952   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:35.830012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:35.872224   65393 cri.go:89] found id: ""
	I0404 22:57:35.872255   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.872267   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:35.872276   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:35.872341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:35.909984   65393 cri.go:89] found id: ""
	I0404 22:57:35.910010   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.910020   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:35.910027   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:35.910088   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:35.948755   65393 cri.go:89] found id: ""
	I0404 22:57:35.948784   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.948795   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:35.948805   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:35.948868   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:35.984167   65393 cri.go:89] found id: ""
	I0404 22:57:35.984195   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.984203   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:35.984212   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:35.984224   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:35.998714   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:35.998740   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:36.070003   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:36.070028   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:36.070044   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:36.149404   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:36.149442   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:36.190749   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:36.190776   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.362424   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.861745   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.306625   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.805161   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.841627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.341476   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:38.747728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:38.761768   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:38.761840   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:38.802423   65393 cri.go:89] found id: ""
	I0404 22:57:38.802456   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.802467   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:38.802474   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:38.802525   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:38.843476   65393 cri.go:89] found id: ""
	I0404 22:57:38.843500   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.843508   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:38.843513   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:38.843583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:38.883093   65393 cri.go:89] found id: ""
	I0404 22:57:38.883125   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.883136   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:38.883145   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:38.883203   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:38.919802   65393 cri.go:89] found id: ""
	I0404 22:57:38.919831   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.919840   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:38.919847   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:38.919914   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:38.955242   65393 cri.go:89] found id: ""
	I0404 22:57:38.955280   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.955294   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:38.955302   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:38.955366   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:38.993377   65393 cri.go:89] found id: ""
	I0404 22:57:38.993409   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.993420   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:38.993428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:38.993486   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:39.033379   65393 cri.go:89] found id: ""
	I0404 22:57:39.033406   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.033417   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:39.033424   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:39.033499   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:39.076827   65393 cri.go:89] found id: ""
	I0404 22:57:39.076853   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.076862   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:39.076870   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:39.076880   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:39.134007   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:39.134045   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:39.149271   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:39.149298   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:39.222569   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:39.222593   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:39.222608   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:39.309583   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:39.309613   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:37.367650   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.862967   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.305489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.807107   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.842108   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.340297   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.343170   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.850659   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:41.866430   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:41.866493   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:41.906390   65393 cri.go:89] found id: ""
	I0404 22:57:41.906418   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.906437   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:41.906445   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:41.906506   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:41.942002   65393 cri.go:89] found id: ""
	I0404 22:57:41.942029   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.942039   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:41.942047   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:41.942105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:41.984054   65393 cri.go:89] found id: ""
	I0404 22:57:41.984078   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.984086   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:41.984092   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:41.984167   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:42.022757   65393 cri.go:89] found id: ""
	I0404 22:57:42.022779   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.022787   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:42.022792   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:42.022869   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:42.063261   65393 cri.go:89] found id: ""
	I0404 22:57:42.063284   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.063292   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:42.063297   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:42.063347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:42.102752   65393 cri.go:89] found id: ""
	I0404 22:57:42.102783   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.102800   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:42.102809   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:42.102872   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:42.139533   65393 cri.go:89] found id: ""
	I0404 22:57:42.139561   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.139570   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:42.139576   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:42.139627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:42.180944   65393 cri.go:89] found id: ""
	I0404 22:57:42.180976   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.180986   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:42.180996   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:42.181008   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:42.231499   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:42.231533   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:42.247888   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:42.247918   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:42.327828   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:42.327849   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:42.327860   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:42.413471   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:42.413509   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:44.958686   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:44.973081   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:44.973159   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:45.011068   65393 cri.go:89] found id: ""
	I0404 22:57:45.011095   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.011103   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:45.011108   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:45.011162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:45.052522   65393 cri.go:89] found id: ""
	I0404 22:57:45.052560   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.052578   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:45.052586   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:45.052649   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:45.091172   65393 cri.go:89] found id: ""
	I0404 22:57:45.091204   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.091215   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:45.091222   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:45.091289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:45.129853   65393 cri.go:89] found id: ""
	I0404 22:57:45.129883   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.129892   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:45.129900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:45.129960   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:45.167779   65393 cri.go:89] found id: ""
	I0404 22:57:45.167806   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.167815   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:45.167822   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:45.167881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:45.205111   65393 cri.go:89] found id: ""
	I0404 22:57:45.205139   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.205153   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:45.205158   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:45.205231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:45.240931   65393 cri.go:89] found id: ""
	I0404 22:57:45.240957   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.240965   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:45.240971   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:45.241033   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:45.277934   65393 cri.go:89] found id: ""
	I0404 22:57:45.277956   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.277964   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:45.277974   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:45.277989   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:45.332725   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:45.332755   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:45.349776   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:45.349806   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:45.428071   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:45.428098   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:45.428113   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:45.510148   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:45.510188   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:42.361878   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.861076   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.304994   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.305336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.841019   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.341801   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.052655   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:48.067339   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:48.067416   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:48.101725   65393 cri.go:89] found id: ""
	I0404 22:57:48.101756   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.101765   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:48.101771   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:48.101823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:48.137109   65393 cri.go:89] found id: ""
	I0404 22:57:48.137136   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.137147   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:48.137153   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:48.137216   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:48.174703   65393 cri.go:89] found id: ""
	I0404 22:57:48.174735   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.174745   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:48.174751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:48.174800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:48.213531   65393 cri.go:89] found id: ""
	I0404 22:57:48.213554   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.213565   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:48.213572   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:48.213621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:48.252457   65393 cri.go:89] found id: ""
	I0404 22:57:48.252483   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.252493   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:48.252500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:48.252566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:48.297769   65393 cri.go:89] found id: ""
	I0404 22:57:48.297797   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.297807   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:48.297815   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:48.297871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:48.336275   65393 cri.go:89] found id: ""
	I0404 22:57:48.336297   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.336312   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:48.336319   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:48.336373   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:48.375192   65393 cri.go:89] found id: ""
	I0404 22:57:48.375220   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.375230   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:48.375242   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:48.375256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:48.428276   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:48.428309   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:48.442545   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:48.442572   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:48.521287   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:48.521314   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:48.521326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:48.605921   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:48.605965   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:51.148671   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:51.163922   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:51.163993   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:51.205098   65393 cri.go:89] found id: ""
	I0404 22:57:51.205127   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.205137   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:51.205145   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:51.205210   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:51.241238   65393 cri.go:89] found id: ""
	I0404 22:57:51.241265   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.241276   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:51.241287   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:51.241352   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:51.279667   65393 cri.go:89] found id: ""
	I0404 22:57:51.279691   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.279700   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:51.279710   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:51.279767   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:51.325092   65393 cri.go:89] found id: ""
	I0404 22:57:51.325114   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.325121   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:51.325127   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:51.325175   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:51.367212   65393 cri.go:89] found id: ""
	I0404 22:57:51.367233   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.367244   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:51.367252   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:51.367334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:51.407716   65393 cri.go:89] found id: ""
	I0404 22:57:51.407739   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.407747   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:51.407753   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:51.407800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:51.448021   65393 cri.go:89] found id: ""
	I0404 22:57:51.448050   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.448060   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:51.448066   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:51.448138   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:46.862201   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:49.361167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.365099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.806224   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.306785   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.840934   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.845038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.484815   65393 cri.go:89] found id: ""
	I0404 22:57:51.484841   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.484849   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:51.484857   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:51.484868   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:51.538503   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:51.538530   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:51.552658   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:51.552686   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:51.628261   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:51.628288   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:51.628313   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:51.713890   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:51.713935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:54.257046   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:54.270797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:54.270853   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:54.308550   65393 cri.go:89] found id: ""
	I0404 22:57:54.308579   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.308589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:54.308596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:54.308666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:54.347736   65393 cri.go:89] found id: ""
	I0404 22:57:54.347764   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.347773   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:54.347779   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:54.347825   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:54.389179   65393 cri.go:89] found id: ""
	I0404 22:57:54.389209   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.389220   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:54.389227   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:54.389286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:54.429961   65393 cri.go:89] found id: ""
	I0404 22:57:54.429984   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.429993   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:54.429998   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:54.430053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:54.469383   65393 cri.go:89] found id: ""
	I0404 22:57:54.469406   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.469414   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:54.469419   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:54.469481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:54.506260   65393 cri.go:89] found id: ""
	I0404 22:57:54.506291   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.506302   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:54.506309   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:54.506374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:54.545050   65393 cri.go:89] found id: ""
	I0404 22:57:54.545080   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.545090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:54.545096   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:54.545144   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:54.584895   65393 cri.go:89] found id: ""
	I0404 22:57:54.584932   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.584943   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:54.584955   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:54.584969   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:54.637294   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:54.637329   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:54.652792   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:54.652821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:54.729248   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:54.729268   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:54.729286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:54.810107   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:54.810135   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:53.860177   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.861048   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.805077   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.806516   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.306075   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.341767   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.343121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:57.356688   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:57.371841   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:57.371901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:57.408063   65393 cri.go:89] found id: ""
	I0404 22:57:57.408087   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.408095   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:57.408112   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:57.408199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:57.444197   65393 cri.go:89] found id: ""
	I0404 22:57:57.444222   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.444231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:57.444241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:57.444291   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:57.479502   65393 cri.go:89] found id: ""
	I0404 22:57:57.479528   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.479536   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:57.479542   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:57.479593   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:57.515017   65393 cri.go:89] found id: ""
	I0404 22:57:57.515044   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.515057   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:57.515064   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:57.515113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:57.550568   65393 cri.go:89] found id: ""
	I0404 22:57:57.550595   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.550603   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:57.550609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:57.550670   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:57.587662   65393 cri.go:89] found id: ""
	I0404 22:57:57.587689   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.587697   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:57.587703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:57.587761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:57.624184   65393 cri.go:89] found id: ""
	I0404 22:57:57.624206   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.624213   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:57.624219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:57.624274   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:57.662289   65393 cri.go:89] found id: ""
	I0404 22:57:57.662312   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.662320   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:57.662329   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:57.662339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:57.677703   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:57.677728   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:57.768164   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:57.768191   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:57.768206   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.853138   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:57.853175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:57.896254   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:57.896291   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.448289   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:00.462243   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:00.462306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:00.500146   65393 cri.go:89] found id: ""
	I0404 22:58:00.500180   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.500191   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:00.500199   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:00.500260   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:00.539443   65393 cri.go:89] found id: ""
	I0404 22:58:00.539469   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.539477   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:00.539482   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:00.539532   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:00.582120   65393 cri.go:89] found id: ""
	I0404 22:58:00.582149   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.582160   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:00.582167   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:00.582231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:00.622269   65393 cri.go:89] found id: ""
	I0404 22:58:00.622291   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.622299   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:00.622305   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:00.622361   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:00.659947   65393 cri.go:89] found id: ""
	I0404 22:58:00.659980   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.659992   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:00.659999   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:00.660053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:00.695604   65393 cri.go:89] found id: ""
	I0404 22:58:00.695632   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.695642   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:00.695650   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:00.695707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:00.737962   65393 cri.go:89] found id: ""
	I0404 22:58:00.738044   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.738065   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:00.738075   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:00.738168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:00.781043   65393 cri.go:89] found id: ""
	I0404 22:58:00.781069   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.781076   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:00.781085   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:00.781096   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:00.828143   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:00.828174   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.883483   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:00.883521   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:00.899435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:00.899463   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:00.975258   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:00.975277   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:00.975288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.861562   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.362255   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.306310   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.805208   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.844797   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.341139   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:03.553230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:03.567909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:03.568005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:03.608638   65393 cri.go:89] found id: ""
	I0404 22:58:03.608666   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.608675   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:03.608680   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:03.608727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:03.647698   65393 cri.go:89] found id: ""
	I0404 22:58:03.647726   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.647735   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:03.647741   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:03.647804   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:03.686143   65393 cri.go:89] found id: ""
	I0404 22:58:03.686167   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.686176   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:03.686181   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:03.686230   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:03.723462   65393 cri.go:89] found id: ""
	I0404 22:58:03.723487   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.723494   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:03.723500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:03.723556   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:03.762299   65393 cri.go:89] found id: ""
	I0404 22:58:03.762332   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.762342   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:03.762350   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:03.762426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:03.798165   65393 cri.go:89] found id: ""
	I0404 22:58:03.798198   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.798215   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:03.798225   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:03.798292   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:03.837607   65393 cri.go:89] found id: ""
	I0404 22:58:03.837636   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.837648   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:03.837655   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:03.837716   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:03.877346   65393 cri.go:89] found id: ""
	I0404 22:58:03.877380   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.877394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:03.877410   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:03.877432   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:03.934033   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:03.934073   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:03.949106   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:03.949131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:04.022674   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:04.022695   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:04.022707   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:04.101187   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:04.101225   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:02.860519   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:04.861267   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.305637   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.804768   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.341713   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.841527   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:06.653116   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:06.667790   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:06.667867   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:06.712208   65393 cri.go:89] found id: ""
	I0404 22:58:06.712230   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.712238   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:06.712243   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:06.712289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:06.748489   65393 cri.go:89] found id: ""
	I0404 22:58:06.748522   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.748533   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:06.748540   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:06.748602   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:06.786720   65393 cri.go:89] found id: ""
	I0404 22:58:06.786745   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.786753   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:06.786758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:06.786805   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:06.825404   65393 cri.go:89] found id: ""
	I0404 22:58:06.825437   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.825444   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:06.825461   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:06.825515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:06.864851   65393 cri.go:89] found id: ""
	I0404 22:58:06.864879   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.864890   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:06.864898   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:06.864959   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:06.906229   65393 cri.go:89] found id: ""
	I0404 22:58:06.906258   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.906268   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:06.906274   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:06.906327   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:06.946126   65393 cri.go:89] found id: ""
	I0404 22:58:06.946153   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.946164   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:06.946172   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:06.946234   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:06.983746   65393 cri.go:89] found id: ""
	I0404 22:58:06.983779   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.983792   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:06.983805   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:06.983821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:07.038290   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:07.038330   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:07.053847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:07.053875   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:07.127453   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:07.127479   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:07.127496   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:07.206638   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:07.206676   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:09.753016   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:09.768850   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:09.768918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:09.804549   65393 cri.go:89] found id: ""
	I0404 22:58:09.804579   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.804589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:09.804596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:09.804653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:09.847299   65393 cri.go:89] found id: ""
	I0404 22:58:09.847323   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.847334   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:09.847341   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:09.847399   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:09.902064   65393 cri.go:89] found id: ""
	I0404 22:58:09.902093   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.902104   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:09.902111   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:09.902171   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:09.953956   65393 cri.go:89] found id: ""
	I0404 22:58:09.953986   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.953997   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:09.954003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:09.954071   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:10.007853   65393 cri.go:89] found id: ""
	I0404 22:58:10.007884   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.007892   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:10.007897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:10.007954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:10.046924   65393 cri.go:89] found id: ""
	I0404 22:58:10.046960   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.046970   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:10.046977   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:10.047038   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:10.086851   65393 cri.go:89] found id: ""
	I0404 22:58:10.086878   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.086890   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:10.086896   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:10.086956   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:10.126678   65393 cri.go:89] found id: ""
	I0404 22:58:10.126710   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.126719   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:10.126727   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:10.126741   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:10.142641   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:10.142669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:10.226953   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:10.226978   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:10.226991   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:10.310046   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:10.310078   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:10.356140   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:10.356173   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:06.861772   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.361398   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:11.361942   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.806398   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.306003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.341652   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.341848   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.343338   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.911501   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:12.924292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:12.924374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:12.959969   65393 cri.go:89] found id: ""
	I0404 22:58:12.959997   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.960007   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:12.960015   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:12.960064   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:12.998817   65393 cri.go:89] found id: ""
	I0404 22:58:12.998846   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.998856   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:12.998876   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:12.998943   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:13.035295   65393 cri.go:89] found id: ""
	I0404 22:58:13.035326   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.035337   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:13.035343   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:13.035403   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:13.076712   65393 cri.go:89] found id: ""
	I0404 22:58:13.076735   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.076744   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:13.076751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:13.076823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:13.112987   65393 cri.go:89] found id: ""
	I0404 22:58:13.113015   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.113023   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:13.113029   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:13.113092   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:13.155330   65393 cri.go:89] found id: ""
	I0404 22:58:13.155355   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.155366   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:13.155373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:13.155432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:13.194322   65393 cri.go:89] found id: ""
	I0404 22:58:13.194357   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.194368   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:13.194374   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:13.194432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:13.231939   65393 cri.go:89] found id: ""
	I0404 22:58:13.231969   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.231981   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:13.231993   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:13.232012   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:13.312244   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:13.312278   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:13.356605   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:13.356640   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:13.411760   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:13.411801   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:13.427373   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:13.427397   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:13.509840   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.010230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:16.023985   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:16.024050   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:16.063038   65393 cri.go:89] found id: ""
	I0404 22:58:16.063062   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.063070   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:16.063075   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:16.063128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:16.100140   65393 cri.go:89] found id: ""
	I0404 22:58:16.100170   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.100182   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:16.100189   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:16.100252   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:16.137319   65393 cri.go:89] found id: ""
	I0404 22:58:16.137354   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.137362   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:16.137373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:16.137425   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:16.173127   65393 cri.go:89] found id: ""
	I0404 22:58:16.173151   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.173159   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:16.173166   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:16.173212   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:16.213639   65393 cri.go:89] found id: ""
	I0404 22:58:16.213667   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.213676   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:16.213683   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:16.213744   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:16.255272   65393 cri.go:89] found id: ""
	I0404 22:58:16.255301   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.255312   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:16.255320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:16.255381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:16.289359   65393 cri.go:89] found id: ""
	I0404 22:58:16.289387   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.289397   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:16.289404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:16.289466   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:16.326711   65393 cri.go:89] found id: ""
	I0404 22:58:16.326738   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.326748   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:16.326791   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:16.326817   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:16.374754   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:16.374788   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:16.425956   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:16.425994   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:16.440815   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:16.440848   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:58:13.363022   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:15.860774   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.306144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.804788   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.844733   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:21.340627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	W0404 22:58:16.512725   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.512750   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:16.512770   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.099694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:19.113559   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:19.113617   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:19.148531   65393 cri.go:89] found id: ""
	I0404 22:58:19.148553   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.148561   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:19.148567   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:19.148627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:19.186728   65393 cri.go:89] found id: ""
	I0404 22:58:19.186750   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.186759   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:19.186764   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:19.186809   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:19.223221   65393 cri.go:89] found id: ""
	I0404 22:58:19.223269   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.223277   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:19.223283   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:19.223350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:19.260459   65393 cri.go:89] found id: ""
	I0404 22:58:19.260492   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.260502   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:19.260509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:19.260571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:19.298503   65393 cri.go:89] found id: ""
	I0404 22:58:19.298527   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.298534   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:19.298540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:19.298603   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:19.339562   65393 cri.go:89] found id: ""
	I0404 22:58:19.339595   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.339605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:19.339613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:19.339674   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:19.379350   65393 cri.go:89] found id: ""
	I0404 22:58:19.379383   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.379394   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:19.379401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:19.379501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:19.417360   65393 cri.go:89] found id: ""
	I0404 22:58:19.417387   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.417394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:19.417403   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:19.417420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.493267   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:19.493300   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:19.533913   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:19.533948   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:19.585900   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:19.585936   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:19.601225   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:19.601259   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:19.675774   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:17.861139   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.360910   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.806558   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.807066   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.304681   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.342104   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.843695   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:22.176660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:22.190161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:22.190239   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:22.226569   65393 cri.go:89] found id: ""
	I0404 22:58:22.226601   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.226612   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:22.226621   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:22.226678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:22.263193   65393 cri.go:89] found id: ""
	I0404 22:58:22.263221   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.263232   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:22.263239   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:22.263296   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:22.305585   65393 cri.go:89] found id: ""
	I0404 22:58:22.305613   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.305625   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:22.305632   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:22.305688   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:22.343575   65393 cri.go:89] found id: ""
	I0404 22:58:22.343602   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.343613   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:22.343620   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:22.343675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:22.381394   65393 cri.go:89] found id: ""
	I0404 22:58:22.381423   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.381432   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:22.381438   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:22.381488   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:22.423631   65393 cri.go:89] found id: ""
	I0404 22:58:22.423664   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.423673   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:22.423680   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:22.423755   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:22.462618   65393 cri.go:89] found id: ""
	I0404 22:58:22.462651   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.462662   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:22.462669   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:22.462729   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:22.499448   65393 cri.go:89] found id: ""
	I0404 22:58:22.499473   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.499481   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:22.499490   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:22.499504   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:22.552937   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:22.552976   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:22.568480   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:22.568508   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:22.647552   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:22.647575   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:22.647587   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:22.732328   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:22.732366   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:25.275187   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:25.290575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:25.290660   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:25.329699   65393 cri.go:89] found id: ""
	I0404 22:58:25.329728   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.329737   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:25.329744   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:25.329808   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:25.368206   65393 cri.go:89] found id: ""
	I0404 22:58:25.368239   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.368250   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:25.368257   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:25.368320   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:25.405444   65393 cri.go:89] found id: ""
	I0404 22:58:25.405490   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.405500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:25.405508   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:25.405566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:25.448856   65393 cri.go:89] found id: ""
	I0404 22:58:25.448883   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.448891   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:25.448897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:25.448952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:25.489308   65393 cri.go:89] found id: ""
	I0404 22:58:25.489340   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.489351   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:25.489358   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:25.489418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:25.532786   65393 cri.go:89] found id: ""
	I0404 22:58:25.532810   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.532820   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:25.532828   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:25.532887   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:25.569370   65393 cri.go:89] found id: ""
	I0404 22:58:25.569400   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.569409   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:25.569428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:25.569508   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:25.609516   65393 cri.go:89] found id: ""
	I0404 22:58:25.609542   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.609553   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:25.609564   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:25.609579   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:25.661976   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:25.662011   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:25.676718   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:25.676743   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:25.754582   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:25.754612   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:25.754629   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:25.837707   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:25.837751   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:22.367423   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:24.860670   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:27.306275   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.343146   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.840946   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.381474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:28.396475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:28.396549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:28.439207   65393 cri.go:89] found id: ""
	I0404 22:58:28.439239   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.439251   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:28.439259   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:28.439341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:28.481537   65393 cri.go:89] found id: ""
	I0404 22:58:28.481559   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.481567   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:28.481572   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:28.481622   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:28.522159   65393 cri.go:89] found id: ""
	I0404 22:58:28.522183   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.522194   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:28.522202   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:28.522267   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:28.563587   65393 cri.go:89] found id: ""
	I0404 22:58:28.563623   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.563634   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:28.563641   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:28.563706   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:28.601846   65393 cri.go:89] found id: ""
	I0404 22:58:28.601874   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.601885   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:28.601892   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:28.601971   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:28.639734   65393 cri.go:89] found id: ""
	I0404 22:58:28.639758   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.639765   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:28.639773   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:28.639832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:28.679049   65393 cri.go:89] found id: ""
	I0404 22:58:28.679079   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.679090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:28.679097   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:28.679152   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:28.721353   65393 cri.go:89] found id: ""
	I0404 22:58:28.721380   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.721390   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:28.721400   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:28.721414   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:28.776618   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:28.776666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:28.792435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:28.792473   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:28.867190   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:28.867219   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:28.867238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:28.950021   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:28.950056   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:26.861121   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.861752   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.862006   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:29.805482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.806982   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:32.843651   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.341399   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.496813   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:31.511401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:31.511462   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:31.552422   65393 cri.go:89] found id: ""
	I0404 22:58:31.552450   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.552458   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:31.552464   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:31.552518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:31.590548   65393 cri.go:89] found id: ""
	I0404 22:58:31.590579   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.590591   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:31.590598   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:31.590653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:31.626198   65393 cri.go:89] found id: ""
	I0404 22:58:31.626227   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.626238   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:31.626245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:31.626316   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:31.664925   65393 cri.go:89] found id: ""
	I0404 22:58:31.664952   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.664960   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:31.664966   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:31.665017   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:31.703008   65393 cri.go:89] found id: ""
	I0404 22:58:31.703031   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.703038   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:31.703044   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:31.703103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:31.741854   65393 cri.go:89] found id: ""
	I0404 22:58:31.741875   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.741884   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:31.741890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:31.741942   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:31.782573   65393 cri.go:89] found id: ""
	I0404 22:58:31.782603   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.782616   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:31.782624   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:31.782675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:31.819796   65393 cri.go:89] found id: ""
	I0404 22:58:31.819832   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.819844   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:31.819855   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:31.819872   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:31.836362   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:31.836396   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:31.916417   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:31.916438   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:31.916451   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:31.999266   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:31.999302   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:32.048147   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:32.048179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:34.601683   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:34.618245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:34.618329   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:34.660565   65393 cri.go:89] found id: ""
	I0404 22:58:34.660591   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.660598   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:34.660604   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:34.660654   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:34.696366   65393 cri.go:89] found id: ""
	I0404 22:58:34.696389   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.696397   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:34.696402   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:34.696463   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:34.734234   65393 cri.go:89] found id: ""
	I0404 22:58:34.734281   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.734293   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:34.734300   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:34.734369   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:34.770632   65393 cri.go:89] found id: ""
	I0404 22:58:34.770668   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.770681   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:34.770689   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:34.770752   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:34.808562   65393 cri.go:89] found id: ""
	I0404 22:58:34.808590   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.808600   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:34.808607   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:34.808677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:34.844177   65393 cri.go:89] found id: ""
	I0404 22:58:34.844209   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.844219   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:34.844228   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:34.844315   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:34.886060   65393 cri.go:89] found id: ""
	I0404 22:58:34.886095   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.886106   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:34.886114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:34.886174   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:34.924731   65393 cri.go:89] found id: ""
	I0404 22:58:34.924759   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.924769   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:34.924781   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:34.924798   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:34.940405   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:34.940437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:35.016762   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:35.016784   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:35.016800   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:35.096653   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:35.096688   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:35.144213   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:35.144241   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:33.361607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.860598   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:34.307621   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:36.805003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.341552   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.841619   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.702332   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:37.716442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:37.716515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:37.755197   65393 cri.go:89] found id: ""
	I0404 22:58:37.755233   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.755244   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:37.755251   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:37.755311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:37.799011   65393 cri.go:89] found id: ""
	I0404 22:58:37.799036   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.799044   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:37.799048   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:37.799105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:37.837437   65393 cri.go:89] found id: ""
	I0404 22:58:37.837466   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.837477   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:37.837486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:37.837543   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:37.876014   65393 cri.go:89] found id: ""
	I0404 22:58:37.876085   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.876097   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:37.876104   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:37.876179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:37.915088   65393 cri.go:89] found id: ""
	I0404 22:58:37.915121   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.915132   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:37.915140   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:37.915205   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:37.954991   65393 cri.go:89] found id: ""
	I0404 22:58:37.955028   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.955039   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:37.955047   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:37.955120   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:38.006875   65393 cri.go:89] found id: ""
	I0404 22:58:38.006906   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.006924   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:38.006930   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:38.006989   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:38.044477   65393 cri.go:89] found id: ""
	I0404 22:58:38.044513   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.044541   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:38.044553   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:38.044569   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:38.086425   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:38.086455   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:38.140159   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:38.140195   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:38.156371   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:38.156406   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:38.229011   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:38.229035   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:38.229058   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:40.809399   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:40.824612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:40.824694   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:40.869397   65393 cri.go:89] found id: ""
	I0404 22:58:40.869483   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.869510   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:40.869523   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:40.869583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:40.911732   65393 cri.go:89] found id: ""
	I0404 22:58:40.911760   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.911782   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:40.911788   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:40.911846   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:40.952165   65393 cri.go:89] found id: ""
	I0404 22:58:40.952193   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.952202   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:40.952209   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:40.952270   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:40.991566   65393 cri.go:89] found id: ""
	I0404 22:58:40.991598   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.991607   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:40.991613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:40.991661   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:41.033467   65393 cri.go:89] found id: ""
	I0404 22:58:41.033496   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.033505   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:41.033534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:41.033595   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:41.073350   65393 cri.go:89] found id: ""
	I0404 22:58:41.073395   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.073405   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:41.073410   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:41.073460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:41.113435   65393 cri.go:89] found id: ""
	I0404 22:58:41.113467   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.113478   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:41.113486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:41.113549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:41.152848   65393 cri.go:89] found id: ""
	I0404 22:58:41.152882   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.152892   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:41.152905   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:41.152919   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:41.199001   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:41.199039   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:41.251155   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:41.251200   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:41.268640   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:41.268669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:41.345101   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:41.345125   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:41.345142   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:37.862623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.865276   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:38.805692   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:40.806509   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.305851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:42.342629   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.841943   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.925251   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:43.940719   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:43.940838   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:43.982366   65393 cri.go:89] found id: ""
	I0404 22:58:43.982391   65393 logs.go:276] 0 containers: []
	W0404 22:58:43.982401   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:43.982409   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:43.982477   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:44.026906   65393 cri.go:89] found id: ""
	I0404 22:58:44.026941   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.026952   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:44.026959   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:44.027024   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:44.063914   65393 cri.go:89] found id: ""
	I0404 22:58:44.063940   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.063948   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:44.063954   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:44.064008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:44.107234   65393 cri.go:89] found id: ""
	I0404 22:58:44.107270   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.107283   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:44.107292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:44.107388   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:44.144613   65393 cri.go:89] found id: ""
	I0404 22:58:44.144637   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.144658   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:44.144664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:44.144734   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:44.184821   65393 cri.go:89] found id: ""
	I0404 22:58:44.184858   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.184866   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:44.184872   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:44.184920   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:44.227152   65393 cri.go:89] found id: ""
	I0404 22:58:44.227181   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.227192   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:44.227200   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:44.227262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:44.266503   65393 cri.go:89] found id: ""
	I0404 22:58:44.266533   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.266544   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:44.266599   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:44.266614   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:44.323524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:44.323565   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:44.340420   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:44.340456   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:44.441098   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:44.441120   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:44.441137   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:44.554462   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:44.554498   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:42.361529   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.362158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:45.805321   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.805597   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.342230   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.840465   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.101901   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:47.116485   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:47.116551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:47.158013   65393 cri.go:89] found id: ""
	I0404 22:58:47.158047   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.158063   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:47.158071   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:47.158136   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:47.194653   65393 cri.go:89] found id: ""
	I0404 22:58:47.194677   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.194688   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:47.194696   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:47.194753   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:47.235406   65393 cri.go:89] found id: ""
	I0404 22:58:47.235435   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.235447   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:47.235456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:47.235518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:47.278696   65393 cri.go:89] found id: ""
	I0404 22:58:47.278724   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.278733   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:47.278741   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:47.278832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:47.316839   65393 cri.go:89] found id: ""
	I0404 22:58:47.316871   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.316883   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:47.316890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:47.316952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:47.363249   65393 cri.go:89] found id: ""
	I0404 22:58:47.363274   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.363282   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:47.363287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:47.363336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:47.402331   65393 cri.go:89] found id: ""
	I0404 22:58:47.402354   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.402362   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:47.402369   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:47.402429   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:47.441136   65393 cri.go:89] found id: ""
	I0404 22:58:47.441156   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.441163   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:47.441171   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:47.441182   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:47.518956   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:47.518981   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:47.518996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:47.600303   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:47.600339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:47.642110   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:47.642138   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:47.694231   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:47.694267   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.209744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:50.229994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:50.230052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:50.299090   65393 cri.go:89] found id: ""
	I0404 22:58:50.299237   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.299257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:50.299266   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:50.299324   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:50.353476   65393 cri.go:89] found id: ""
	I0404 22:58:50.353506   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.353516   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:50.353524   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:50.353580   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:50.407652   65393 cri.go:89] found id: ""
	I0404 22:58:50.407677   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.407684   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:50.407692   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:50.407775   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:50.445622   65393 cri.go:89] found id: ""
	I0404 22:58:50.445651   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.445658   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:50.445666   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:50.445749   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:50.485764   65393 cri.go:89] found id: ""
	I0404 22:58:50.485791   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.485803   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:50.485810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:50.485891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:50.525552   65393 cri.go:89] found id: ""
	I0404 22:58:50.525583   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.525601   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:50.525609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:50.525675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:50.564452   65393 cri.go:89] found id: ""
	I0404 22:58:50.564477   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.564488   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:50.564498   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:50.564548   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:50.608534   65393 cri.go:89] found id: ""
	I0404 22:58:50.608564   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.608572   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:50.608580   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:50.608592   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:50.686645   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:50.686690   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:50.731681   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:50.731711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:50.788550   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:50.788593   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.804637   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:50.804666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:50.888452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:46.860641   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.362015   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.808312   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:52.304782   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:51.841258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.842803   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.341352   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.388937   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:53.403744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:53.403822   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:53.441796   65393 cri.go:89] found id: ""
	I0404 22:58:53.441817   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.441826   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:53.441835   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:53.441899   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:53.486302   65393 cri.go:89] found id: ""
	I0404 22:58:53.486326   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.486335   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:53.486340   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:53.486405   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:53.526429   65393 cri.go:89] found id: ""
	I0404 22:58:53.526455   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.526462   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:53.526467   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:53.526521   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:53.568076   65393 cri.go:89] found id: ""
	I0404 22:58:53.568099   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.568107   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:53.568114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:53.568185   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:53.608922   65393 cri.go:89] found id: ""
	I0404 22:58:53.608956   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.608964   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:53.608970   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:53.609027   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:53.645604   65393 cri.go:89] found id: ""
	I0404 22:58:53.645635   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.645646   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:53.645658   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:53.645718   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:53.684258   65393 cri.go:89] found id: ""
	I0404 22:58:53.684283   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.684293   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:53.684301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:53.684359   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:53.722616   65393 cri.go:89] found id: ""
	I0404 22:58:53.722647   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.722658   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:53.722671   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:53.722685   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:53.781126   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:53.781162   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:53.796188   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:53.796219   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:53.880536   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:53.880558   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:53.880571   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:53.970199   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:53.970236   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:51.861594   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.864468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.361876   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:54.306294   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.805489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.341483   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.840493   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.510589   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:56.525258   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:56.525336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:56.563463   65393 cri.go:89] found id: ""
	I0404 22:58:56.563495   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.563506   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:56.563514   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:56.563576   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:56.603330   65393 cri.go:89] found id: ""
	I0404 22:58:56.603355   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.603364   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:56.603369   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:56.603418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:56.645266   65393 cri.go:89] found id: ""
	I0404 22:58:56.645291   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.645351   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:56.645368   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:56.645426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:56.691073   65393 cri.go:89] found id: ""
	I0404 22:58:56.691098   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.691108   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:56.691121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:56.691179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:56.729687   65393 cri.go:89] found id: ""
	I0404 22:58:56.729726   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.729737   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:56.729744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:56.729832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:56.770699   65393 cri.go:89] found id: ""
	I0404 22:58:56.770732   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.770743   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:56.770751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:56.770815   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:56.808948   65393 cri.go:89] found id: ""
	I0404 22:58:56.808972   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.808982   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:56.808989   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:56.809069   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:56.852467   65393 cri.go:89] found id: ""
	I0404 22:58:56.852490   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.852501   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:56.852511   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:56.852526   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:56.903115   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:56.903147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:56.919521   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:56.919550   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:56.995265   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:56.995297   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:56.995317   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:57.071623   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:57.071660   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:59.614687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:59.628404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:59.628474   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:59.666293   65393 cri.go:89] found id: ""
	I0404 22:58:59.666320   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.666328   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:59.666332   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:59.666407   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:59.706066   65393 cri.go:89] found id: ""
	I0404 22:58:59.706093   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.706104   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:59.706111   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:59.706166   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:59.750467   65393 cri.go:89] found id: ""
	I0404 22:58:59.750493   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.750504   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:59.750511   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:59.750575   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:59.788477   65393 cri.go:89] found id: ""
	I0404 22:58:59.788499   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.788507   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:59.788512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:59.788558   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:59.829030   65393 cri.go:89] found id: ""
	I0404 22:58:59.829052   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.829062   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:59.829069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:59.829151   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:59.870121   65393 cri.go:89] found id: ""
	I0404 22:58:59.870146   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.870156   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:59.870163   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:59.870225   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:59.912149   65393 cri.go:89] found id: ""
	I0404 22:58:59.912170   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.912178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:59.912185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:59.912245   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:59.950867   65393 cri.go:89] found id: ""
	I0404 22:58:59.950903   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.950914   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:59.950924   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:59.950950   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:00.031828   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:00.031862   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:00.079398   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:00.079425   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:00.128993   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:00.129024   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:00.146214   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:00.146238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:00.224580   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:58.362770   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.861231   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.806527   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.305039   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:03.306465   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.845344   64902 pod_ready.go:81] duration metric: took 4m0.011500779s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:01.845369   64902 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:01.845377   64902 pod_ready.go:38] duration metric: took 4m3.21302807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:01.845392   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:01.845433   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:01.845499   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:01.927539   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:01.927568   64902 cri.go:89] found id: ""
	I0404 22:59:01.927578   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:01.927638   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.933410   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:01.933496   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:01.990735   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:01.990765   64902 cri.go:89] found id: ""
	I0404 22:59:01.990773   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:01.990823   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.996039   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:01.996159   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.043251   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:02.043278   64902 cri.go:89] found id: ""
	I0404 22:59:02.043286   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:02.043339   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.048227   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.048300   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.089311   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:02.089334   64902 cri.go:89] found id: ""
	I0404 22:59:02.089344   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:02.089400   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.094466   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.094531   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.146624   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:02.146646   64902 cri.go:89] found id: ""
	I0404 22:59:02.146653   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:02.146711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.151408   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.151491   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.194337   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:02.194361   64902 cri.go:89] found id: ""
	I0404 22:59:02.194370   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:02.194423   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.199225   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:02.199290   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:02.240467   64902 cri.go:89] found id: ""
	I0404 22:59:02.240494   64902 logs.go:276] 0 containers: []
	W0404 22:59:02.240505   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:02.240511   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:02.240572   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:02.284235   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:02.284261   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.284265   64902 cri.go:89] found id: ""
	I0404 22:59:02.284272   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:02.284337   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.289673   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.294519   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:02.294542   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.335250   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:02.335274   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:02.903414   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:02.903453   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:02.959171   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:02.959205   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:03.121608   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:03.121639   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:03.178477   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:03.178513   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:03.220790   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:03.220827   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:03.268659   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:03.268691   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:03.349809   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:03.349853   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:03.397296   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.397325   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:03.450216   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.450242   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.467583   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:03.467610   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:03.525777   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:03.525816   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.077111   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:06.097631   64902 api_server.go:72] duration metric: took 4m15.180038628s to wait for apiserver process to appear ...
	I0404 22:59:06.097660   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:06.097705   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:06.097767   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:06.146987   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:06.147013   64902 cri.go:89] found id: ""
	I0404 22:59:06.147023   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:06.147083   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.153474   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:06.153549   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:06.204491   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.204515   64902 cri.go:89] found id: ""
	I0404 22:59:06.204522   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:06.204576   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.209689   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:06.209768   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.248698   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:06.248730   64902 cri.go:89] found id: ""
	I0404 22:59:06.248741   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:06.248803   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.254268   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.254362   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.301004   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.301024   64902 cri.go:89] found id: ""
	I0404 22:59:06.301034   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:06.301093   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.306557   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.306625   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.350111   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:06.350136   64902 cri.go:89] found id: ""
	I0404 22:59:06.350146   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:06.350205   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.355488   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.355574   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.724888   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:02.738926   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:02.738986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:02.780451   65393 cri.go:89] found id: ""
	I0404 22:59:02.780475   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.780486   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:02.780493   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:02.780551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:02.825725   65393 cri.go:89] found id: ""
	I0404 22:59:02.825745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.825753   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:02.825758   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:02.825806   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.866717   65393 cri.go:89] found id: ""
	I0404 22:59:02.866745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.866752   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:02.866757   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.866803   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.909020   65393 cri.go:89] found id: ""
	I0404 22:59:02.909040   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.909048   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:02.909053   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.909103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.951031   65393 cri.go:89] found id: ""
	I0404 22:59:02.951055   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.951064   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:02.951069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.951128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:03.000274   65393 cri.go:89] found id: ""
	I0404 22:59:03.000304   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.000315   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:03.000322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:03.000385   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:03.041766   65393 cri.go:89] found id: ""
	I0404 22:59:03.041797   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.041807   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:03.041814   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:03.041871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:03.086600   65393 cri.go:89] found id: ""
	I0404 22:59:03.086623   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.086631   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:03.086639   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:03.086654   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:03.145868   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.145902   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.164345   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:03.164373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:03.239295   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:03.239331   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:03.239347   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:03.337429   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.337471   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:05.885881   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:05.899569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:05.899627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:05.939067   65393 cri.go:89] found id: ""
	I0404 22:59:05.939090   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.939097   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:05.939104   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:05.939163   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:05.977410   65393 cri.go:89] found id: ""
	I0404 22:59:05.977434   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.977441   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:05.977447   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:05.977492   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.018127   65393 cri.go:89] found id: ""
	I0404 22:59:06.018149   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.018156   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:06.018161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.018211   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.057280   65393 cri.go:89] found id: ""
	I0404 22:59:06.057316   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.057327   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:06.057334   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.057396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.095220   65393 cri.go:89] found id: ""
	I0404 22:59:06.095246   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.095255   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:06.095262   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.095334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:06.136192   65393 cri.go:89] found id: ""
	I0404 22:59:06.136291   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.136310   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:06.136320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.136381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.193307   65393 cri.go:89] found id: ""
	I0404 22:59:06.193336   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.193347   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.193355   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:06.193415   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:06.233527   65393 cri.go:89] found id: ""
	I0404 22:59:06.233558   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.233566   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:06.233574   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.233585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:06.320567   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.320602   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.363687   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:06.363718   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:06.423209   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:06.423246   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:06.437978   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:06.438009   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:02.862485   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.360827   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.805057   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:07.805584   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:06.405660   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.405683   64902 cri.go:89] found id: ""
	I0404 22:59:06.405693   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:06.405758   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.410717   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.410794   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.462354   64902 cri.go:89] found id: ""
	I0404 22:59:06.462386   64902 logs.go:276] 0 containers: []
	W0404 22:59:06.462398   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.462404   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:06.462452   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:06.511014   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.511038   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.511044   64902 cri.go:89] found id: ""
	I0404 22:59:06.511052   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:06.511110   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.517858   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.522766   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:06.522794   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.576654   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:06.576689   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.623256   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:06.623286   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.678337   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:06.678369   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.722261   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:06.722291   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.762151   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.762184   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.814956   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.814983   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:07.273914   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:07.273975   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:07.328704   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:07.328745   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:07.344734   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:07.344765   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:07.473031   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:07.473067   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:07.523879   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:07.523922   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:07.569734   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:07.569793   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.109972   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:59:10.115439   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:59:10.117026   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:59:10.117051   64902 api_server.go:131] duration metric: took 4.019378057s to wait for apiserver health ...
	I0404 22:59:10.117059   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:10.117084   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:10.117138   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:10.161095   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.161113   64902 cri.go:89] found id: ""
	I0404 22:59:10.161120   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:10.161167   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.165630   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:10.165694   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:10.204636   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.204655   64902 cri.go:89] found id: ""
	I0404 22:59:10.204662   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:10.204711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.209645   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:10.209721   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:10.256830   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:10.256856   64902 cri.go:89] found id: ""
	I0404 22:59:10.256866   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:10.256917   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.261699   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:10.261763   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:10.304897   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.304914   64902 cri.go:89] found id: ""
	I0404 22:59:10.304922   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:10.304976   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.310884   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:10.310961   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:10.349724   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.349745   64902 cri.go:89] found id: ""
	I0404 22:59:10.349754   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:10.349811   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.354588   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:10.354643   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:10.399066   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.399087   64902 cri.go:89] found id: ""
	I0404 22:59:10.399113   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:10.399160   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.404698   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:10.404771   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:10.454142   64902 cri.go:89] found id: ""
	I0404 22:59:10.454173   64902 logs.go:276] 0 containers: []
	W0404 22:59:10.454183   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:10.454189   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:10.454347   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:10.503594   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.503620   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.503624   64902 cri.go:89] found id: ""
	I0404 22:59:10.503633   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:10.503696   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.510223   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.515081   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:10.515102   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.571927   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:10.571965   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.625355   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:10.625391   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.669033   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:10.669061   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.729035   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:10.729070   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.778108   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:10.778138   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.828328   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:10.828355   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:10.885732   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:10.885784   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:10.905718   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:10.905759   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:11.284079   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:11.284115   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:11.328982   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:11.329013   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:11.372384   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:11.372415   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:06.524620   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:09.025228   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:09.040219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:09.040306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:09.077397   65393 cri.go:89] found id: ""
	I0404 22:59:09.077428   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.077439   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:09.077447   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:09.077530   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:09.115279   65393 cri.go:89] found id: ""
	I0404 22:59:09.115309   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.115319   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:09.115326   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:09.115391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:09.156338   65393 cri.go:89] found id: ""
	I0404 22:59:09.156367   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.156375   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:09.156381   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:09.156444   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:09.199281   65393 cri.go:89] found id: ""
	I0404 22:59:09.199310   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.199319   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:09.199325   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:09.199377   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:09.239845   65393 cri.go:89] found id: ""
	I0404 22:59:09.239870   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.239878   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:09.239883   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:09.239944   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:09.285520   65393 cri.go:89] found id: ""
	I0404 22:59:09.285551   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.285562   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:09.285569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:09.285635   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:09.322005   65393 cri.go:89] found id: ""
	I0404 22:59:09.322033   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.322043   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:09.322050   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:09.322113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:09.357356   65393 cri.go:89] found id: ""
	I0404 22:59:09.357384   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.357394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:09.357404   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:09.357419   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:09.437353   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:09.437389   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:09.480066   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:09.480095   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:09.534394   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:09.534433   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:09.548926   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:09.548951   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:09.623970   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:07.363250   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.860037   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.806287   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:12.306720   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:11.521189   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:11.521222   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:14.076828   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:14.076859   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.076866   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.076872   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.076878   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.076882   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.076888   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.076895   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.076901   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.076911   64902 system_pods.go:74] duration metric: took 3.959845225s to wait for pod list to return data ...
	I0404 22:59:14.076920   64902 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:14.080018   64902 default_sa.go:45] found service account: "default"
	I0404 22:59:14.080052   64902 default_sa.go:55] duration metric: took 3.124198ms for default service account to be created ...
	I0404 22:59:14.080063   64902 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:14.085812   64902 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:14.085841   64902 system_pods.go:89] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.085847   64902 system_pods.go:89] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.085851   64902 system_pods.go:89] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.085855   64902 system_pods.go:89] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.085859   64902 system_pods.go:89] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.085863   64902 system_pods.go:89] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.085871   64902 system_pods.go:89] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.085875   64902 system_pods.go:89] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.085882   64902 system_pods.go:126] duration metric: took 5.81489ms to wait for k8s-apps to be running ...
	I0404 22:59:14.085889   64902 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:14.085933   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:14.106253   64902 system_svc.go:56] duration metric: took 20.352553ms WaitForService to wait for kubelet
	I0404 22:59:14.106295   64902 kubeadm.go:576] duration metric: took 4m23.188703249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:14.106319   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:14.110333   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:14.110359   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:14.110373   64902 node_conditions.go:105] duration metric: took 4.048469ms to run NodePressure ...
	I0404 22:59:14.110389   64902 start.go:240] waiting for startup goroutines ...
	I0404 22:59:14.110399   64902 start.go:245] waiting for cluster config update ...
	I0404 22:59:14.110412   64902 start.go:254] writing updated cluster config ...
	I0404 22:59:14.110736   64902 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:14.160959   64902 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 22:59:14.164129   64902 out.go:177] * Done! kubectl is now configured to use "embed-certs-143118" cluster and "default" namespace by default
	I0404 22:59:12.124508   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:12.139786   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:12.139862   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:12.179841   65393 cri.go:89] found id: ""
	I0404 22:59:12.179864   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.179872   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:12.179877   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:12.179934   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:12.217232   65393 cri.go:89] found id: ""
	I0404 22:59:12.217260   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.217270   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:12.217277   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:12.217333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:12.258875   65393 cri.go:89] found id: ""
	I0404 22:59:12.258905   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.258917   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:12.258927   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:12.258990   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:12.302455   65393 cri.go:89] found id: ""
	I0404 22:59:12.302493   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.302508   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:12.302516   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:12.302581   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:12.345264   65393 cri.go:89] found id: ""
	I0404 22:59:12.345298   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.345310   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:12.345322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:12.345386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:12.383782   65393 cri.go:89] found id: ""
	I0404 22:59:12.383805   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.383814   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:12.383820   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:12.383881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:12.421738   65393 cri.go:89] found id: ""
	I0404 22:59:12.421767   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.421777   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:12.421784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:12.421844   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:12.460348   65393 cri.go:89] found id: ""
	I0404 22:59:12.460379   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.460391   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:12.460407   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:12.460424   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:12.516043   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:12.516081   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:12.531557   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:12.531585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:12.603052   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:12.603080   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:12.603098   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:12.689033   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:12.689069   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.253621   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:15.268084   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:15.268162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:15.306888   65393 cri.go:89] found id: ""
	I0404 22:59:15.306913   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.306922   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:15.306929   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:15.306986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:15.345169   65393 cri.go:89] found id: ""
	I0404 22:59:15.345203   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.345214   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:15.345221   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:15.345279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:15.381835   65393 cri.go:89] found id: ""
	I0404 22:59:15.381863   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.381874   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:15.381881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:15.381941   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:15.418221   65393 cri.go:89] found id: ""
	I0404 22:59:15.418247   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.418254   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:15.418259   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:15.418302   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:15.456658   65393 cri.go:89] found id: ""
	I0404 22:59:15.456684   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.456696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:15.456703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:15.456761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:15.498325   65393 cri.go:89] found id: ""
	I0404 22:59:15.498349   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.498359   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:15.498367   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:15.498443   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:15.538694   65393 cri.go:89] found id: ""
	I0404 22:59:15.538723   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.538731   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:15.538738   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:15.538796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:15.575615   65393 cri.go:89] found id: ""
	I0404 22:59:15.575642   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.575650   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:15.575660   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:15.575672   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.616824   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:15.616851   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:15.670897   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:15.670945   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:15.688394   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:15.688429   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:15.764184   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:15.764207   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:15.764222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:11.860993   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:13.861520   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:16.361055   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:14.809060   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:17.308968   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:18.346181   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:18.361390   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:18.361465   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:18.410432   65393 cri.go:89] found id: ""
	I0404 22:59:18.410463   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.410474   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:18.410482   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:18.410547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:18.449280   65393 cri.go:89] found id: ""
	I0404 22:59:18.449309   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.449317   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:18.449322   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:18.449380   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:18.508387   65393 cri.go:89] found id: ""
	I0404 22:59:18.508411   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.508420   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:18.508425   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:18.508481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:18.555469   65393 cri.go:89] found id: ""
	I0404 22:59:18.555492   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.555501   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:18.555506   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:18.555551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:18.592206   65393 cri.go:89] found id: ""
	I0404 22:59:18.592231   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.592239   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:18.592245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:18.592294   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:18.628850   65393 cri.go:89] found id: ""
	I0404 22:59:18.628890   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.628900   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:18.628908   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:18.628968   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:18.667507   65393 cri.go:89] found id: ""
	I0404 22:59:18.667543   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.667556   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:18.667564   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:18.667630   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:18.706367   65393 cri.go:89] found id: ""
	I0404 22:59:18.706392   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.706410   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:18.706422   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:18.706438   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:18.761069   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:18.761108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:18.777164   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:18.777204   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:18.861741   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:18.861769   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:18.861782   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:18.948064   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:18.948108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:18.361239   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:20.362324   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:19.805576   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.805654   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.497977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:21.511810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:21.511901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:21.548151   65393 cri.go:89] found id: ""
	I0404 22:59:21.548177   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.548188   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:21.548196   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:21.548241   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:21.587395   65393 cri.go:89] found id: ""
	I0404 22:59:21.587420   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.587436   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:21.587441   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:21.587507   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:21.624280   65393 cri.go:89] found id: ""
	I0404 22:59:21.624312   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.624322   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:21.624330   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:21.624391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:21.664557   65393 cri.go:89] found id: ""
	I0404 22:59:21.664583   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.664593   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:21.664600   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:21.664666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:21.705570   65393 cri.go:89] found id: ""
	I0404 22:59:21.705601   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.705614   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:21.705622   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:21.705683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:21.744722   65393 cri.go:89] found id: ""
	I0404 22:59:21.744755   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.744764   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:21.744770   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:21.744831   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:21.784997   65393 cri.go:89] found id: ""
	I0404 22:59:21.785036   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.785047   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:21.785054   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:21.785117   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:21.825404   65393 cri.go:89] found id: ""
	I0404 22:59:21.825428   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.825435   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:21.825443   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:21.825467   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:21.880421   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:21.880470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:21.898337   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:21.898367   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:21.987201   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:21.987233   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:21.987249   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.070135   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:22.070176   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:24.613694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:24.627661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:24.627823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:24.667552   65393 cri.go:89] found id: ""
	I0404 22:59:24.667580   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.667594   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:24.667601   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:24.667663   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:24.705866   65393 cri.go:89] found id: ""
	I0404 22:59:24.705888   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.705897   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:24.705905   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:24.705975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:24.746920   65393 cri.go:89] found id: ""
	I0404 22:59:24.746948   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.746959   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:24.746967   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:24.747021   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:24.792236   65393 cri.go:89] found id: ""
	I0404 22:59:24.792259   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.792270   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:24.792281   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:24.792340   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:24.832066   65393 cri.go:89] found id: ""
	I0404 22:59:24.832096   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.832107   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:24.832133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:24.832207   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:24.873568   65393 cri.go:89] found id: ""
	I0404 22:59:24.873594   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.873605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:24.873612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:24.873678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:24.922719   65393 cri.go:89] found id: ""
	I0404 22:59:24.922743   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.922750   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:24.922756   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:24.922801   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:24.981180   65393 cri.go:89] found id: ""
	I0404 22:59:24.981229   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.981243   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:24.981255   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:24.981272   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:25.039695   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:25.039735   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:25.097992   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:25.098037   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:25.113941   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:25.113970   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:25.185615   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:25.185643   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:25.185659   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.362720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.363009   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.305260   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:26.805336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:27.772867   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:27.787478   65393 kubeadm.go:591] duration metric: took 4m3.182360219s to restartPrimaryControlPlane
	W0404 22:59:27.787560   65393 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:27.787594   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 22:59:30.083285   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.29566411s)
	I0404 22:59:30.083364   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:30.099547   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:59:30.110792   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:59:30.123094   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:59:30.123110   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:59:30.123152   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:59:30.133535   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:59:30.133596   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:59:30.144194   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:59:30.154411   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:59:30.154476   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:59:30.164648   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.174227   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:59:30.174292   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.184396   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:59:30.194311   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:59:30.194370   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
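	The grep/rm pairs above apply one rule before re-running kubeadm: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint, delete it so `kubeadm init` can regenerate it. A rough sketch of that rule (hypothetical helper, not minikube's kubeadm.go; endpoint taken from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// removeStaleKubeconfig keeps the file only if it already points at the
	// expected control-plane endpoint; grep exits non-zero when the pattern is
	// absent or the file is missing, exactly as in the log above.
	func removeStaleKubeconfig(path, endpoint string) error {
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			return exec.Command("sudo", "rm", "-f", path).Run()
		}
		return nil
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			if err := removeStaleKubeconfig("/etc/kubernetes/"+f, endpoint); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}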
	I0404 22:59:30.204463   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 22:59:30.284881   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 22:59:30.285065   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 22:59:30.439256   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 22:59:30.439379   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 22:59:30.439558   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 22:59:30.640320   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 22:59:30.642667   65393 out.go:204]   - Generating certificates and keys ...
	I0404 22:59:30.642787   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 22:59:30.642883   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 22:59:30.643858   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 22:59:30.643964   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 22:59:30.644068   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 22:59:30.644183   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 22:59:30.644914   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 22:59:30.646151   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 22:59:30.646413   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 22:59:30.646975   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 22:59:30.647036   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 22:59:30.647163   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 22:59:30.818578   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 22:59:31.002928   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 22:59:31.300200   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 22:59:26.861545   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:29.361333   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.508251   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 22:59:31.525515   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 22:59:31.527679   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 22:59:31.527773   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 22:59:31.680829   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 22:59:28.806960   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.305235   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:33.305735   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.682802   65393 out.go:204]   - Booting up control plane ...
	I0404 22:59:31.682939   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 22:59:31.684100   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 22:59:31.685552   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 22:59:31.686931   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 22:59:31.689241   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 22:59:31.863885   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:34.361582   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:36.362733   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:35.810851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:37.305033   65047 pod_ready.go:81] duration metric: took 4m0.006524977s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:37.305064   65047 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:37.305072   65047 pod_ready.go:38] duration metric: took 4m5.047389638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:37.305095   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:37.305121   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:37.305167   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:37.365002   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:37.365022   65047 cri.go:89] found id: ""
	I0404 22:59:37.365029   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:37.365079   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.370431   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:37.370490   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:37.411461   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:37.411489   65047 cri.go:89] found id: ""
	I0404 22:59:37.411498   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:37.411546   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.416214   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:37.416280   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:37.467470   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:37.467500   65047 cri.go:89] found id: ""
	I0404 22:59:37.467510   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:37.467565   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.472332   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:37.472394   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:37.511792   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:37.511815   65047 cri.go:89] found id: ""
	I0404 22:59:37.511821   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:37.511870   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.516458   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:37.516514   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:37.556843   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:37.556869   65047 cri.go:89] found id: ""
	I0404 22:59:37.556880   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:37.556941   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.561556   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:37.561617   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:37.601741   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.601764   65047 cri.go:89] found id: ""
	I0404 22:59:37.601775   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:37.601831   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.606376   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:37.606449   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:37.647116   65047 cri.go:89] found id: ""
	I0404 22:59:37.647139   65047 logs.go:276] 0 containers: []
	W0404 22:59:37.647146   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:37.647151   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:37.647211   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:37.694580   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.694603   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:37.694607   65047 cri.go:89] found id: ""
	I0404 22:59:37.694614   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:37.694662   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.699109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.703776   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:37.703802   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.758969   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:37.759001   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.808316   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:37.808339   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:37.873353   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:37.873388   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:37.891256   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:37.891290   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:38.047292   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:38.047323   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:38.104845   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:38.104881   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:38.180173   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:38.180209   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:38.225152   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:38.225185   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:38.275621   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:38.275647   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:38.861119   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:40.862057   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:38.791198   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:38.791239   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:38.838995   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:38.839032   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:38.889944   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:38.889980   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.437490   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:41.454599   65047 api_server.go:72] duration metric: took 4m14.969220816s to wait for apiserver process to appear ...
	I0404 22:59:41.454641   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:41.454676   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:41.454729   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:41.496719   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.496740   65047 cri.go:89] found id: ""
	I0404 22:59:41.496747   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:41.496790   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.501869   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:41.501949   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:41.548019   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:41.548038   65047 cri.go:89] found id: ""
	I0404 22:59:41.548047   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:41.548105   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.552683   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:41.552743   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:41.594018   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.594046   65047 cri.go:89] found id: ""
	I0404 22:59:41.594054   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:41.594109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.598612   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:41.598670   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:41.640618   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:41.640644   65047 cri.go:89] found id: ""
	I0404 22:59:41.640654   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:41.640711   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.645201   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:41.645271   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:41.691378   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.691401   65047 cri.go:89] found id: ""
	I0404 22:59:41.691410   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:41.691465   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.696359   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:41.696421   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:41.738169   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:41.738199   65047 cri.go:89] found id: ""
	I0404 22:59:41.738208   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:41.738261   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.742769   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:41.742844   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:41.782136   65047 cri.go:89] found id: ""
	I0404 22:59:41.782163   65047 logs.go:276] 0 containers: []
	W0404 22:59:41.782175   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:41.782181   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:41.782244   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:41.825698   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:41.825717   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:41.825721   65047 cri.go:89] found id: ""
	I0404 22:59:41.825728   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:41.825773   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.834332   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.840251   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:41.840280   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.914817   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:41.914856   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.956375   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:41.956401   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.999930   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:41.999960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:42.067118   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:42.067148   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:42.104788   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:42.104818   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:42.542407   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:42.542444   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:42.596923   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:42.596957   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:42.613545   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:42.613571   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:42.732728   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:42.732756   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:42.801975   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:42.802010   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:42.844728   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:42.844757   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:42.897576   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:42.897602   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:42.863167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.360824   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.448107   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:59:45.453565   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:59:45.454921   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:59:45.454945   65047 api_server.go:131] duration metric: took 4.000295856s to wait for apiserver health ...
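	The healthz wait recorded here is a polled HTTPS GET against the apiserver that succeeds once it returns status 200 with an "ok" body. A minimal sketch (hypothetical; the real client authenticates with the cluster CA and client certificates rather than skipping verification, and the address is just the one from this log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.50.77:8443/healthz" // address observed in the log above
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: real code loads the cluster CA instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	}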
	I0404 22:59:45.454955   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:45.454985   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:45.455041   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:45.499810   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:45.499841   65047 cri.go:89] found id: ""
	I0404 22:59:45.499849   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:45.499903   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.506696   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:45.506797   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:45.549063   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:45.549087   65047 cri.go:89] found id: ""
	I0404 22:59:45.549094   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:45.549135   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.553601   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:45.553663   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:45.605961   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:45.605989   65047 cri.go:89] found id: ""
	I0404 22:59:45.605999   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:45.606057   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.610424   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:45.610496   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:45.651488   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:45.651520   65047 cri.go:89] found id: ""
	I0404 22:59:45.651530   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:45.651589   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.655935   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:45.656005   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:45.694207   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:45.694225   65047 cri.go:89] found id: ""
	I0404 22:59:45.694235   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:45.694290   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.698466   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:45.698520   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:45.736294   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:45.736318   65047 cri.go:89] found id: ""
	I0404 22:59:45.736326   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:45.736389   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.741126   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:45.741200   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:45.783225   65047 cri.go:89] found id: ""
	I0404 22:59:45.783254   65047 logs.go:276] 0 containers: []
	W0404 22:59:45.783265   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:45.783271   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:45.783332   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:45.823086   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:45.823110   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:45.823116   65047 cri.go:89] found id: ""
	I0404 22:59:45.823126   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:45.823182   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.827497   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.832389   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:45.832416   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:45.892925   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:45.892967   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:46.018980   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:46.019011   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:46.083405   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:46.083438   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:46.144135   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:46.144169   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:46.190770   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:46.190803   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:46.257768   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:46.257801   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:46.275102   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:46.275127   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:46.312291   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:46.312318   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:46.351086   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:46.351112   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:46.393478   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:46.393506   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:46.438745   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:46.438778   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:46.822672   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:46.822706   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:49.386090   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:49.386122   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.386127   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.386133   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.386137   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.386140   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.386143   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.386150   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.386156   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.386165   65047 system_pods.go:74] duration metric: took 3.931202298s to wait for pod list to return data ...
	I0404 22:59:49.386176   65047 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:49.389035   65047 default_sa.go:45] found service account: "default"
	I0404 22:59:49.389067   65047 default_sa.go:55] duration metric: took 2.877732ms for default service account to be created ...
	I0404 22:59:49.389079   65047 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:49.394829   65047 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:49.394859   65047 system_pods.go:89] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.394867   65047 system_pods.go:89] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.394875   65047 system_pods.go:89] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.394882   65047 system_pods.go:89] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.394888   65047 system_pods.go:89] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.394896   65047 system_pods.go:89] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.394904   65047 system_pods.go:89] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.394912   65047 system_pods.go:89] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.394922   65047 system_pods.go:126] duration metric: took 5.837975ms to wait for k8s-apps to be running ...
	I0404 22:59:49.394931   65047 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:49.394980   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:49.414076   65047 system_svc.go:56] duration metric: took 19.132995ms WaitForService to wait for kubelet
	I0404 22:59:49.414111   65047 kubeadm.go:576] duration metric: took 4m22.928735837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:49.414133   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:49.417160   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:49.417189   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:49.417204   65047 node_conditions.go:105] duration metric: took 3.065597ms to run NodePressure ...
	I0404 22:59:49.417218   65047 start.go:240] waiting for startup goroutines ...
	I0404 22:59:49.417228   65047 start.go:245] waiting for cluster config update ...
	I0404 22:59:49.417240   65047 start.go:254] writing updated cluster config ...
	I0404 22:59:49.417615   65047 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:49.470214   65047 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0404 22:59:49.472584   65047 out.go:177] * Done! kubectl is now configured to use "no-preload-024416" cluster and "default" namespace by default
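	The closing line compares the kubectl client (1.29.3) against the cluster version (1.30.0-rc.0) and reports a minor-version skew of 1, which kubectl tolerates. A quick sketch of that comparison (hypothetical helper, not minikube's version check):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorOf extracts the minor component from a "major.minor.patch[-pre]" version string.
	func minorOf(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		client, cluster := "1.29.3", "1.30.0-rc.0"
		skew := minorOf(cluster) - minorOf(client)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	}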
	I0404 22:59:47.361662   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:49.861684   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:52.360447   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:54.361574   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:56.361652   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:57.854723   64791 pod_ready.go:81] duration metric: took 4m0.000936307s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:57.854770   64791 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" (will not retry!)
	I0404 22:59:57.854788   64791 pod_ready.go:38] duration metric: took 4m7.483622498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:57.854822   64791 kubeadm.go:591] duration metric: took 4m16.210645162s to restartPrimaryControlPlane
	W0404 22:59:57.854889   64791 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:57.854920   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:00:11.689226   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:00:11.689589   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:11.689862   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:16.690194   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:16.690413   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:30.113988   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.259044217s)
	I0404 23:00:30.114079   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:30.130372   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 23:00:30.141114   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:00:30.151572   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:00:30.151606   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 23:00:30.151649   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 23:00:30.162006   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:00:30.162058   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:00:30.172386   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 23:00:30.182463   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:00:30.182526   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:00:30.192462   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.202932   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:00:30.203003   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.212623   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 23:00:30.222016   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:00:30.222079   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 23:00:30.231912   64791 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:00:30.292277   64791 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0404 23:00:30.292356   64791 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:00:30.453305   64791 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:00:30.453442   64791 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:00:30.453626   64791 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:00:30.680949   64791 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:00:26.690539   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:26.690756   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:30.683066   64791 out.go:204]   - Generating certificates and keys ...
	I0404 23:00:30.683154   64791 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:00:30.683236   64791 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:00:30.683345   64791 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:00:30.683428   64791 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:00:30.683546   64791 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:00:30.683635   64791 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:00:30.683733   64791 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:00:30.683815   64791 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:00:30.684097   64791 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:00:30.684545   64791 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:00:30.684950   64791 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:00:30.685055   64791 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:00:30.969937   64791 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:00:31.196768   64791 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0404 23:00:31.562187   64791 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:00:32.005580   64791 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:00:32.066695   64791 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:00:32.067434   64791 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:00:32.070078   64791 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:00:32.072059   64791 out.go:204]   - Booting up control plane ...
	I0404 23:00:32.072188   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:00:32.072299   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:00:32.072803   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:00:32.094384   64791 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:00:32.095424   64791 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:00:32.095565   64791 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:00:32.235639   64791 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:00:37.740974   64791 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.503079 seconds
	I0404 23:00:37.757121   64791 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0404 23:00:37.771037   64791 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0404 23:00:38.311403   64791 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0404 23:00:38.311608   64791 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-952083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0404 23:00:38.826726   64791 kubeadm.go:309] [bootstrap-token] Using token: fa5m5r.x1an64r8vrgp89m4
	I0404 23:00:38.828302   64791 out.go:204]   - Configuring RBAC rules ...
	I0404 23:00:38.828445   64791 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0404 23:00:38.835805   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0404 23:00:38.846727   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0404 23:00:38.850494   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0404 23:00:38.854653   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0404 23:00:38.859330   64791 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0404 23:00:38.882227   64791 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0404 23:00:39.129283   64791 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0404 23:00:39.263855   64791 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0404 23:00:39.265385   64791 kubeadm.go:309] 
	I0404 23:00:39.265534   64791 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0404 23:00:39.265556   64791 kubeadm.go:309] 
	I0404 23:00:39.265675   64791 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0404 23:00:39.265697   64791 kubeadm.go:309] 
	I0404 23:00:39.265728   64791 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0404 23:00:39.265816   64791 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0404 23:00:39.265873   64791 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0404 23:00:39.265883   64791 kubeadm.go:309] 
	I0404 23:00:39.265948   64791 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0404 23:00:39.265957   64791 kubeadm.go:309] 
	I0404 23:00:39.266009   64791 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0404 23:00:39.266018   64791 kubeadm.go:309] 
	I0404 23:00:39.266072   64791 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0404 23:00:39.266189   64791 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0404 23:00:39.266303   64791 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0404 23:00:39.266313   64791 kubeadm.go:309] 
	I0404 23:00:39.266445   64791 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0404 23:00:39.266542   64791 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0404 23:00:39.266555   64791 kubeadm.go:309] 
	I0404 23:00:39.266661   64791 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.266828   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c \
	I0404 23:00:39.266878   64791 kubeadm.go:309] 	--control-plane 
	I0404 23:00:39.266887   64791 kubeadm.go:309] 
	I0404 23:00:39.267081   64791 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0404 23:00:39.267099   64791 kubeadm.go:309] 
	I0404 23:00:39.267206   64791 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.267353   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c 
	I0404 23:00:39.278406   64791 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:00:39.278440   64791 cni.go:84] Creating CNI manager for ""
	I0404 23:00:39.278450   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 23:00:39.280130   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 23:00:39.281491   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 23:00:39.300708   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 23:00:39.411609   64791 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 23:00:39.411700   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:39.411741   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-952083 minikube.k8s.io/updated_at=2024_04_04T23_00_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=default-k8s-diff-port-952083 minikube.k8s.io/primary=true
	I0404 23:00:39.491213   64791 ops.go:34] apiserver oom_adj: -16
	I0404 23:00:39.639999   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.140239   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.640887   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.140139   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.641048   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.140111   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.640439   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.140701   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.640565   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.140884   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.640405   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.140665   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.640037   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.140251   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.640442   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.691433   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:46.691736   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:47.140207   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:47.640279   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.141046   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.640259   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.140525   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.640869   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.140946   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.640215   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.140706   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.640589   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.140926   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.640774   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.755392   64791 kubeadm.go:1107] duration metric: took 13.343764127s to wait for elevateKubeSystemPrivileges
	W0404 23:00:52.755430   64791 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0404 23:00:52.755438   64791 kubeadm.go:393] duration metric: took 5m11.165500768s to StartCluster
	I0404 23:00:52.755452   64791 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.755542   64791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 23:00:52.757921   64791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.758240   64791 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 23:00:52.760258   64791 out.go:177] * Verifying Kubernetes components...
	I0404 23:00:52.758360   64791 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 23:00:52.758468   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 23:00:52.761972   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 23:00:52.761988   64791 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.761992   64791 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762023   64791 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-952083"
	I0404 23:00:52.762033   64791 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762045   64791 addons.go:243] addon metrics-server should already be in state true
	I0404 23:00:52.761969   64791 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762082   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762089   64791 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762098   64791 addons.go:243] addon storage-provisioner should already be in state true
	I0404 23:00:52.762120   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762369   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762402   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762483   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762595   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.780081   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33543
	I0404 23:00:52.780630   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.782784   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0404 23:00:52.782816   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0404 23:00:52.783239   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783487   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783765   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783788   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.783934   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783952   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784138   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784378   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784386   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.784518   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.784536   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784883   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.785321   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785347   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.785391   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785423   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.788871   64791 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.788894   64791 addons.go:243] addon default-storageclass should already be in state true
	I0404 23:00:52.788931   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.789296   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.789333   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.804398   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33107
	I0404 23:00:52.804904   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.805654   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.805676   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.806123   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.806391   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.806825   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32861
	I0404 23:00:52.807228   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.807701   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.807729   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.808198   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.808222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.808414   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.810322   64791 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 23:00:52.809224   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0404 23:00:52.809886   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.811775   64791 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:52.811788   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 23:00:52.811806   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.813354   64791 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 23:00:52.812191   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.814489   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.814754   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 23:00:52.814765   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 23:00:52.814784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.815034   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.815192   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.815469   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.815641   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.815811   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.815997   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.816011   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.816227   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.816944   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.817981   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.818023   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.818260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818295   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.818316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818487   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.818752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.819016   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.819223   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.835713   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0404 23:00:52.836456   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.837140   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.837168   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.837987   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.838475   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.840280   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.840539   64791 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:52.840558   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 23:00:52.840576   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.843852   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.844244   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844418   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.844593   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.844757   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.844899   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.960152   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 23:00:52.982987   64791 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992420   64791 node_ready.go:49] node "default-k8s-diff-port-952083" has status "Ready":"True"
	I0404 23:00:52.992460   64791 node_ready.go:38] duration metric: took 9.431627ms for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992472   64791 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 23:00:52.998485   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:53.098746   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 23:00:53.098775   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 23:00:53.122387   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 23:00:53.122418   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 23:00:53.123566   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:53.143424   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:53.165026   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 23:00:53.165052   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 23:00:53.225963   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 23:00:53.720949   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720969   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720980   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.720988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721399   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721407   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721413   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721415   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721423   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721425   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721763   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721768   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721774   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721781   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.754309   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.754337   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.754642   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.754661   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.174695   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.174726   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175070   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175111   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175133   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175146   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.175154   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175443   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175446   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175454   64791 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-952083"
	I0404 23:00:54.177650   64791 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0404 23:00:54.179204   64791 addons.go:505] duration metric: took 1.420843749s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0404 23:00:55.012235   64791 pod_ready.go:102] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"False"
	I0404 23:00:56.006950   64791 pod_ready.go:92] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.006974   64791 pod_ready.go:81] duration metric: took 3.00846182s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.006983   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.012991   64791 pod_ready.go:92] pod "coredns-76f75df574-vnzlh" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.013015   64791 pod_ready.go:81] duration metric: took 6.025362ms for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.013028   64791 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.017976   64791 pod_ready.go:92] pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.017999   64791 pod_ready.go:81] duration metric: took 4.963826ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.018008   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022797   64791 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.022816   64791 pod_ready.go:81] duration metric: took 4.802173ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022825   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.027987   64791 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.028011   64791 pod_ready.go:81] duration metric: took 5.178244ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.028024   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402956   64791 pod_ready.go:92] pod "kube-proxy-lbw9b" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.402978   64791 pod_ready.go:81] duration metric: took 374.945741ms for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402998   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806688   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.806723   64791 pod_ready.go:81] duration metric: took 403.715948ms for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806734   64791 pod_ready.go:38] duration metric: took 3.814250651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 23:00:56.806748   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 23:00:56.806804   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 23:00:56.823558   64791 api_server.go:72] duration metric: took 4.065275824s to wait for apiserver process to appear ...
	I0404 23:00:56.823582   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 23:00:56.823602   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 23:00:56.828204   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 23:00:56.829616   64791 api_server.go:141] control plane version: v1.29.3
	I0404 23:00:56.829647   64791 api_server.go:131] duration metric: took 6.059596ms to wait for apiserver health ...
	I0404 23:00:56.829654   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 23:00:57.009301   64791 system_pods.go:59] 9 kube-system pods found
	I0404 23:00:57.009336   64791 system_pods.go:61] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.009343   64791 system_pods.go:61] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.009349   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.009354   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.009359   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.009364   64791 system_pods.go:61] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.009368   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.009377   64791 system_pods.go:61] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.009382   64791 system_pods.go:61] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.009394   64791 system_pods.go:74] duration metric: took 179.733932ms to wait for pod list to return data ...
	I0404 23:00:57.009404   64791 default_sa.go:34] waiting for default service account to be created ...
	I0404 23:00:57.203053   64791 default_sa.go:45] found service account: "default"
	I0404 23:00:57.203080   64791 default_sa.go:55] duration metric: took 193.668691ms for default service account to be created ...
	I0404 23:00:57.203092   64791 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 23:00:57.407952   64791 system_pods.go:86] 9 kube-system pods found
	I0404 23:00:57.407986   64791 system_pods.go:89] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.407992   64791 system_pods.go:89] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.407996   64791 system_pods.go:89] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.408001   64791 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.408005   64791 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.408009   64791 system_pods.go:89] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.408013   64791 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.408021   64791 system_pods.go:89] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.408025   64791 system_pods.go:89] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.408033   64791 system_pods.go:126] duration metric: took 204.93565ms to wait for k8s-apps to be running ...
	I0404 23:00:57.408044   64791 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 23:00:57.408086   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:57.426022   64791 system_svc.go:56] duration metric: took 17.970809ms WaitForService to wait for kubelet
	I0404 23:00:57.426056   64791 kubeadm.go:576] duration metric: took 4.66777886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 23:00:57.426088   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 23:00:57.603468   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 23:00:57.603495   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 23:00:57.603508   64791 node_conditions.go:105] duration metric: took 177.414876ms to run NodePressure ...
	I0404 23:00:57.603522   64791 start.go:240] waiting for startup goroutines ...
	I0404 23:00:57.603532   64791 start.go:245] waiting for cluster config update ...
	I0404 23:00:57.603544   64791 start.go:254] writing updated cluster config ...
	I0404 23:00:57.603820   64791 ssh_runner.go:195] Run: rm -f paused
	I0404 23:00:57.652962   64791 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 23:00:57.655346   64791 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-952083" cluster and "default" namespace by default
	I0404 23:01:26.692667   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:01:26.693040   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:01:26.693065   65393 kubeadm.go:309] 
	I0404 23:01:26.693121   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:01:26.693176   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:01:26.693188   65393 kubeadm.go:309] 
	I0404 23:01:26.693230   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:01:26.693296   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:01:26.693448   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:01:26.693460   65393 kubeadm.go:309] 
	I0404 23:01:26.693615   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:01:26.693668   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:01:26.693715   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:01:26.693724   65393 kubeadm.go:309] 
	I0404 23:01:26.693859   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:01:26.693972   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:01:26.693981   65393 kubeadm.go:309] 
	I0404 23:01:26.694101   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:01:26.694189   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:01:26.694308   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:01:26.694419   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:01:26.694439   65393 kubeadm.go:309] 
	I0404 23:01:26.695392   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:01:26.695534   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:01:26.695645   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0404 23:01:26.695786   65393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0404 23:01:26.695852   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:01:27.175340   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:01:27.191436   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:01:27.202985   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:01:27.203023   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 23:01:27.203095   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 23:01:27.214266   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:01:27.214326   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:01:27.225823   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 23:01:27.236219   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:01:27.236291   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:01:27.247062   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.257772   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:01:27.257838   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.270595   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 23:01:27.283809   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:01:27.283884   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 23:01:27.294917   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:01:27.370268   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 23:01:27.370431   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:01:27.531171   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:01:27.531309   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:01:27.531502   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:01:27.741128   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:01:27.743555   65393 out.go:204]   - Generating certificates and keys ...
	I0404 23:01:27.743674   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:01:27.743778   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:01:27.743900   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:01:27.744020   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:01:27.744144   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:01:27.744231   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:01:27.744396   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:01:27.744532   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:01:27.744762   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:01:27.745172   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:01:27.745405   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:01:27.745902   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:01:27.811633   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:01:27.874609   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:01:28.009290   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:01:28.171654   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:01:28.193647   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:01:28.194533   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:01:28.194615   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:01:28.345640   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:01:28.348028   65393 out.go:204]   - Booting up control plane ...
	I0404 23:01:28.348233   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:01:28.352245   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:01:28.354730   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:01:28.354860   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:01:28.366484   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:02:08.368069   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:02:08.368190   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:08.368485   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:13.368934   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:13.369212   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:23.369698   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:23.369947   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:43.370821   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:43.371097   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372612   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:03:23.372925   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372952   65393 kubeadm.go:309] 
	I0404 23:03:23.373009   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:03:23.373060   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:03:23.373083   65393 kubeadm.go:309] 
	I0404 23:03:23.373139   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:03:23.373204   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:03:23.373444   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:03:23.373460   65393 kubeadm.go:309] 
	I0404 23:03:23.373609   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:03:23.373664   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:03:23.373709   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:03:23.373720   65393 kubeadm.go:309] 
	I0404 23:03:23.373881   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:03:23.373993   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:03:23.374004   65393 kubeadm.go:309] 
	I0404 23:03:23.374160   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:03:23.374293   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:03:23.374425   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:03:23.374542   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:03:23.374565   65393 kubeadm.go:309] 
	I0404 23:03:23.375946   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:03:23.376063   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:03:23.376228   65393 kubeadm.go:393] duration metric: took 7m58.828175379s to StartCluster
	I0404 23:03:23.376287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 23:03:23.376229   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0404 23:03:23.376350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 23:03:23.427597   65393 cri.go:89] found id: ""
	I0404 23:03:23.427625   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.427637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 23:03:23.427644   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 23:03:23.427709   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 23:03:23.466127   65393 cri.go:89] found id: ""
	I0404 23:03:23.466158   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.466168   65393 logs.go:278] No container was found matching "etcd"
	I0404 23:03:23.466176   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 23:03:23.466240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 23:03:23.509244   65393 cri.go:89] found id: ""
	I0404 23:03:23.509287   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.509295   65393 logs.go:278] No container was found matching "coredns"
	I0404 23:03:23.509301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 23:03:23.509347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 23:03:23.553691   65393 cri.go:89] found id: ""
	I0404 23:03:23.553722   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.553732   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 23:03:23.553740   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 23:03:23.553807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 23:03:23.601941   65393 cri.go:89] found id: ""
	I0404 23:03:23.601965   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.601973   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 23:03:23.601979   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 23:03:23.602037   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 23:03:23.639767   65393 cri.go:89] found id: ""
	I0404 23:03:23.639802   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.639811   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 23:03:23.639817   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 23:03:23.639875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 23:03:23.680136   65393 cri.go:89] found id: ""
	I0404 23:03:23.680168   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.680179   65393 logs.go:278] No container was found matching "kindnet"
	I0404 23:03:23.680187   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 23:03:23.680246   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 23:03:23.719745   65393 cri.go:89] found id: ""
	I0404 23:03:23.719767   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.719774   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 23:03:23.719784   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 23:03:23.719797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 23:03:23.776065   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 23:03:23.776105   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 23:03:23.791442   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 23:03:23.791469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 23:03:23.884793   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 23:03:23.884820   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 23:03:23.884836   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 23:03:24.001882   65393 logs.go:123] Gathering logs for container status ...
	I0404 23:03:24.001926   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0404 23:03:24.056020   65393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0404 23:03:24.056075   65393 out.go:239] * 
	W0404 23:03:24.056157   65393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.056191   65393 out.go:239] * 
	W0404 23:03:24.057119   65393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 23:03:24.060566   65393 out.go:177] 
	W0404 23:03:24.061904   65393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.061981   65393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0404 23:03:24.062009   65393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0404 23:03:24.063812   65393 out.go:177] 
	
	
	==> CRI-O <==
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.595018414Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272131594994042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9437ef8a-427d-41b0-a3ac-241d7fa03673 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.595588424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=008a1542-b574-404b-847d-f563c8c969c0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.595638735Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=008a1542-b574-404b-847d-f563c8c969c0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.595835525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271355127234931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd77b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882004c8da33ff78ce32e21053270a83a295742d2c7c682bfd31d06c437fe1d5,PodSandboxId:37c98ea2e0c35a5c2510f9171c8b2fe4191be347df5671a38aee7c53f97c0820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271333611660421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d670143e-2580-40d9-a69c-b7623a37e199,},Annotations:map[string]string{io.kubernetes.container.hash: 873f5140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff,PodSandboxId:1a4d3439c1ebd0f1f9e8f108972b338cffee7828aba2fdff494ca25aa86f75fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271329898220433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ede65fe-7ab4-443f-8cae-a6ea4cd27985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a72cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451,PodSandboxId:023bdf3b3dea5ff985056149d2447a9d2543e48cf84a6d7aedfc89065586d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712271323423859564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmx89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d643ba1-44fb-4783-8d
5b-df8a4c0f29fa,},Annotations:map[string]string{io.kubernetes.container.hash: d364d278,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271323424175554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd7
7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915,PodSandboxId:896ee236d9b00ec75030b73a5e577f85352e6eaa7786a8000ef5998737894a4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712271317752355395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaa5131fe0a844be66fa13bbad7df76,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513,PodSandboxId:b8d66dc2b6cd6d2568e072b2cba5481d34627f9419dd835473e921d22d6daa64,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271317635453787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfef7834e5d0faa44021f99a114c5487,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 76ecd6c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904,PodSandboxId:d37b6c573061cefcb42c25719234bb9bcb57b73c3da6c4a2b91ea175461ef019,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712271317626857011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36e0205c12bb19768a121c4e55c508b,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38,PodSandboxId:5f39b041e61d15d81780fb2bf769f90d89b1d8f3216031648b97d3cad3524926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712271317550844313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ca7ca6192bfab1dd2064d90a78723,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f34bad78,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=008a1542-b574-404b-847d-f563c8c969c0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.638362029Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b1d358c-de58-4543-aaab-025edcb740e1 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.638465027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b1d358c-de58-4543-aaab-025edcb740e1 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.639579825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd8e5f61-fe78-45e2-950a-e8bc53deda2f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.640199399Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272131640175072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd8e5f61-fe78-45e2-950a-e8bc53deda2f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.640711763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c29c7626-e4eb-4ab0-81bf-47b02527340c name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.640767993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c29c7626-e4eb-4ab0-81bf-47b02527340c name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.641097746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271355127234931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd77b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882004c8da33ff78ce32e21053270a83a295742d2c7c682bfd31d06c437fe1d5,PodSandboxId:37c98ea2e0c35a5c2510f9171c8b2fe4191be347df5671a38aee7c53f97c0820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271333611660421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d670143e-2580-40d9-a69c-b7623a37e199,},Annotations:map[string]string{io.kubernetes.container.hash: 873f5140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff,PodSandboxId:1a4d3439c1ebd0f1f9e8f108972b338cffee7828aba2fdff494ca25aa86f75fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271329898220433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ede65fe-7ab4-443f-8cae-a6ea4cd27985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a72cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451,PodSandboxId:023bdf3b3dea5ff985056149d2447a9d2543e48cf84a6d7aedfc89065586d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712271323423859564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmx89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d643ba1-44fb-4783-8d
5b-df8a4c0f29fa,},Annotations:map[string]string{io.kubernetes.container.hash: d364d278,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271323424175554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd7
7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915,PodSandboxId:896ee236d9b00ec75030b73a5e577f85352e6eaa7786a8000ef5998737894a4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712271317752355395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaa5131fe0a844be66fa13bbad7df76,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513,PodSandboxId:b8d66dc2b6cd6d2568e072b2cba5481d34627f9419dd835473e921d22d6daa64,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271317635453787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfef7834e5d0faa44021f99a114c5487,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 76ecd6c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904,PodSandboxId:d37b6c573061cefcb42c25719234bb9bcb57b73c3da6c4a2b91ea175461ef019,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712271317626857011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36e0205c12bb19768a121c4e55c508b,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38,PodSandboxId:5f39b041e61d15d81780fb2bf769f90d89b1d8f3216031648b97d3cad3524926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712271317550844313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ca7ca6192bfab1dd2064d90a78723,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f34bad78,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c29c7626-e4eb-4ab0-81bf-47b02527340c name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.682829472Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c7c0791-d850-4404-863a-b3ad9be0053a name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.682959218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c7c0791-d850-4404-863a-b3ad9be0053a name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.684695453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b403adf-c5bf-4c65-b3a6-16b9b487c509 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.685252782Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272131685210311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b403adf-c5bf-4c65-b3a6-16b9b487c509 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.685943497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4ea6504-4e99-47fc-a8b5-87f40af74936 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.685998142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4ea6504-4e99-47fc-a8b5-87f40af74936 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.686350280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271355127234931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd77b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882004c8da33ff78ce32e21053270a83a295742d2c7c682bfd31d06c437fe1d5,PodSandboxId:37c98ea2e0c35a5c2510f9171c8b2fe4191be347df5671a38aee7c53f97c0820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271333611660421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d670143e-2580-40d9-a69c-b7623a37e199,},Annotations:map[string]string{io.kubernetes.container.hash: 873f5140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff,PodSandboxId:1a4d3439c1ebd0f1f9e8f108972b338cffee7828aba2fdff494ca25aa86f75fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271329898220433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ede65fe-7ab4-443f-8cae-a6ea4cd27985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a72cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451,PodSandboxId:023bdf3b3dea5ff985056149d2447a9d2543e48cf84a6d7aedfc89065586d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712271323423859564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmx89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d643ba1-44fb-4783-8d
5b-df8a4c0f29fa,},Annotations:map[string]string{io.kubernetes.container.hash: d364d278,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271323424175554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd7
7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915,PodSandboxId:896ee236d9b00ec75030b73a5e577f85352e6eaa7786a8000ef5998737894a4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712271317752355395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaa5131fe0a844be66fa13bbad7df76,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513,PodSandboxId:b8d66dc2b6cd6d2568e072b2cba5481d34627f9419dd835473e921d22d6daa64,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271317635453787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfef7834e5d0faa44021f99a114c5487,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 76ecd6c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904,PodSandboxId:d37b6c573061cefcb42c25719234bb9bcb57b73c3da6c4a2b91ea175461ef019,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712271317626857011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36e0205c12bb19768a121c4e55c508b,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38,PodSandboxId:5f39b041e61d15d81780fb2bf769f90d89b1d8f3216031648b97d3cad3524926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712271317550844313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ca7ca6192bfab1dd2064d90a78723,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f34bad78,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4ea6504-4e99-47fc-a8b5-87f40af74936 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.723691640Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eafc9035-c433-4c21-b3bc-b61e0d09a670 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.724137843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eafc9035-c433-4c21-b3bc-b61e0d09a670 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.725656210Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=523c27f2-47e4-4c6a-9b49-418298171242 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.726764189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272131726698679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=523c27f2-47e4-4c6a-9b49-418298171242 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.728003202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d773887-2103-49b7-8583-a833f17bda2d name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.728119319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d773887-2103-49b7-8583-a833f17bda2d name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:08:51 no-preload-024416 crio[723]: time="2024-04-04 23:08:51.728861900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271355127234931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd77b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882004c8da33ff78ce32e21053270a83a295742d2c7c682bfd31d06c437fe1d5,PodSandboxId:37c98ea2e0c35a5c2510f9171c8b2fe4191be347df5671a38aee7c53f97c0820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271333611660421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d670143e-2580-40d9-a69c-b7623a37e199,},Annotations:map[string]string{io.kubernetes.container.hash: 873f5140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff,PodSandboxId:1a4d3439c1ebd0f1f9e8f108972b338cffee7828aba2fdff494ca25aa86f75fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271329898220433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ede65fe-7ab4-443f-8cae-a6ea4cd27985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a72cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451,PodSandboxId:023bdf3b3dea5ff985056149d2447a9d2543e48cf84a6d7aedfc89065586d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712271323423859564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmx89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d643ba1-44fb-4783-8d
5b-df8a4c0f29fa,},Annotations:map[string]string{io.kubernetes.container.hash: d364d278,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271323424175554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd7
7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915,PodSandboxId:896ee236d9b00ec75030b73a5e577f85352e6eaa7786a8000ef5998737894a4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712271317752355395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaa5131fe0a844be66fa13bbad7df76,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513,PodSandboxId:b8d66dc2b6cd6d2568e072b2cba5481d34627f9419dd835473e921d22d6daa64,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271317635453787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfef7834e5d0faa44021f99a114c5487,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 76ecd6c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904,PodSandboxId:d37b6c573061cefcb42c25719234bb9bcb57b73c3da6c4a2b91ea175461ef019,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712271317626857011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36e0205c12bb19768a121c4e55c508b,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38,PodSandboxId:5f39b041e61d15d81780fb2bf769f90d89b1d8f3216031648b97d3cad3524926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712271317550844313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ca7ca6192bfab1dd2064d90a78723,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f34bad78,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d773887-2103-49b7-8583-a833f17bda2d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11c58a1830991       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   bc81d5b907b24       storage-provisioner
	882004c8da33f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   37c98ea2e0c35       busybox
	b193f00fa4600       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   1a4d3439c1ebd       coredns-7db6d8ff4d-wr424
	608d21b5e121f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   bc81d5b907b24       storage-provisioner
	fb4517a71e257       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652                                      13 minutes ago      Running             kube-proxy                1                   023bdf3b3dea5       kube-proxy-zmx89
	d3b7424b0efb3       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5                                      13 minutes ago      Running             kube-scheduler            1                   896ee236d9b00       kube-scheduler-no-preload-024416
	edeb6b8feb7b1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   b8d66dc2b6cd6       etcd-no-preload-024416
	06183daed52cd       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a                                      13 minutes ago      Running             kube-controller-manager   1                   d37b6c573061c       kube-controller-manager-no-preload-024416
	ecfe112abbd47       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3                                      13 minutes ago      Running             kube-apiserver            1                   5f39b041e61d1       kube-apiserver-no-preload-024416
	
	
	==> coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35245 - 52393 "HINFO IN 7345092753685362976.4093367830504548005. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009638879s
	
	
	==> describe nodes <==
	Name:               no-preload-024416
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-024416
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=no-preload-024416
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T22_46_54_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 22:46:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-024416
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 23:08:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 23:06:05 +0000   Thu, 04 Apr 2024 22:46:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 23:06:05 +0000   Thu, 04 Apr 2024 22:46:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 23:06:05 +0000   Thu, 04 Apr 2024 22:46:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 23:06:05 +0000   Thu, 04 Apr 2024 22:55:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.77
	  Hostname:    no-preload-024416
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e91563183734070bb442c9a633fdfac
	  System UUID:                4e915631-8373-4070-bb44-2c9a633fdfac
	  Boot ID:                    86452d26-49f4-4443-9a9a-946a4639d8db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.0
	  Kube-Proxy Version:         v1.30.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-wr424                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-024416                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-024416             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-024416    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-zmx89                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-024416             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-5q4ff              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-024416 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-024416 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-024416 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-024416 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-024416 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-024416 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node no-preload-024416 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-024416 event: Registered Node no-preload-024416 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-024416 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-024416 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-024416 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-024416 event: Registered Node no-preload-024416 in Controller
	
	
	==> dmesg <==
	[Apr 4 22:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054415] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045527] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.644506] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.835887] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.687223] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.533607] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.057321] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067447] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.191649] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.173953] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.403466] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[Apr 4 22:55] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.064927] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.564858] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +6.615700] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.463062] systemd-fstab-generator[1975]: Ignoring "noauto" option for root device
	[  +1.593430] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.509957] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] <==
	{"level":"info","ts":"2024-04-04T22:55:43.146653Z","caller":"traceutil/trace.go:171","msg":"trace[572749447] linearizableReadLoop","detail":"{readStateIndex:641; appliedIndex:640; }","duration":"1.34895849s","start":"2024-04-04T22:55:41.797683Z","end":"2024-04-04T22:55:43.146642Z","steps":["trace[572749447] 'read index received'  (duration: 101.093202ms)","trace[572749447] 'applied index is now lower than readState.Index'  (duration: 1.24786375s)"],"step_count":2}
	{"level":"info","ts":"2024-04-04T22:55:43.146833Z","caller":"traceutil/trace.go:171","msg":"trace[1028063896] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"1.396183504s","start":"2024-04-04T22:55:41.750642Z","end":"2024-04-04T22:55:43.146826Z","steps":["trace[1028063896] 'process raft request'  (duration: 275.631115ms)","trace[1028063896] 'compare'  (duration: 1.120086807s)"],"step_count":2}
	{"level":"warn","ts":"2024-04-04T22:55:43.146915Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:41.750626Z","time spent":"1.396254276s","remote":"127.0.0.1:51280","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-024416\" mod_revision:586 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-024416\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-024416\" > >"}
	{"level":"warn","ts":"2024-04-04T22:55:43.147018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.218599ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" ","response":"range_response_count:1 size:4280"}
	{"level":"info","ts":"2024-04-04T22:55:43.147141Z","caller":"traceutil/trace.go:171","msg":"trace[1578574406] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff; range_end:; response_count:1; response_revision:605; }","duration":"281.370385ms","start":"2024-04-04T22:55:42.865759Z","end":"2024-04-04T22:55:43.147129Z","steps":["trace[1578574406] 'agreement among raft nodes before linearized reading'  (duration: 281.079729ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.147292Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.349627635s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" ","response":"range_response_count:1 size:4280"}
	{"level":"info","ts":"2024-04-04T22:55:43.147344Z","caller":"traceutil/trace.go:171","msg":"trace[348841512] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff; range_end:; response_count:1; response_revision:605; }","duration":"1.349703059s","start":"2024-04-04T22:55:41.797633Z","end":"2024-04-04T22:55:43.147336Z","steps":["trace[348841512] 'agreement among raft nodes before linearized reading'  (duration: 1.349604297s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.147369Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:41.797619Z","time spent":"1.34974223s","remote":"127.0.0.1:51180","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4304,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" "}
	{"level":"warn","ts":"2024-04-04T22:55:43.147446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.54968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-5q4ff.17c335b374c84bf1\" ","response":"range_response_count:1 size:817"}
	{"level":"info","ts":"2024-04-04T22:55:43.147495Z","caller":"traceutil/trace.go:171","msg":"trace[437692337] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-5q4ff.17c335b374c84bf1; range_end:; response_count:1; response_revision:605; }","duration":"280.620438ms","start":"2024-04-04T22:55:42.866867Z","end":"2024-04-04T22:55:43.147488Z","steps":["trace[437692337] 'agreement among raft nodes before linearized reading'  (duration: 280.43041ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T22:55:43.601417Z","caller":"traceutil/trace.go:171","msg":"trace[714666392] linearizableReadLoop","detail":"{readStateIndex:643; appliedIndex:642; }","duration":"371.142343ms","start":"2024-04-04T22:55:43.230255Z","end":"2024-04-04T22:55:43.601397Z","steps":["trace[714666392] 'read index received'  (duration: 369.676254ms)","trace[714666392] 'applied index is now lower than readState.Index'  (duration: 1.464874ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-04T22:55:43.601761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"371.477238ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-5q4ff.17c335b375b6d11e\" ","response":"range_response_count:1 size:940"}
	{"level":"info","ts":"2024-04-04T22:55:43.60233Z","caller":"traceutil/trace.go:171","msg":"trace[52618212] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-5q4ff.17c335b375b6d11e; range_end:; response_count:1; response_revision:607; }","duration":"372.107455ms","start":"2024-04-04T22:55:43.230208Z","end":"2024-04-04T22:55:43.602315Z","steps":["trace[52618212] 'agreement among raft nodes before linearized reading'  (duration: 371.402308ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.602409Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:43.230196Z","time spent":"372.197163ms","remote":"127.0.0.1:51050","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":964,"request content":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-5q4ff.17c335b375b6d11e\" "}
	{"level":"warn","ts":"2024-04-04T22:55:43.601829Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"371.322428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-04-04T22:55:43.60263Z","caller":"traceutil/trace.go:171","msg":"trace[46313904] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff; range_end:; response_count:1; response_revision:607; }","duration":"372.138829ms","start":"2024-04-04T22:55:43.230478Z","end":"2024-04-04T22:55:43.602617Z","steps":["trace[46313904] 'agreement among raft nodes before linearized reading'  (duration: 371.305566ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.602682Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:43.230473Z","time spent":"372.19806ms","remote":"127.0.0.1:51180","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4260,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" "}
	{"level":"warn","ts":"2024-04-04T22:55:43.601923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.631656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-04T22:55:43.602863Z","caller":"traceutil/trace.go:171","msg":"trace[1549639602] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:607; }","duration":"352.614577ms","start":"2024-04-04T22:55:43.250239Z","end":"2024-04-04T22:55:43.602853Z","steps":["trace[1549639602] 'agreement among raft nodes before linearized reading'  (duration: 351.664519ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.602892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:43.250226Z","time spent":"352.657151ms","remote":"127.0.0.1:50952","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-04-04T22:55:43.601999Z","caller":"traceutil/trace.go:171","msg":"trace[341973304] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"446.329738ms","start":"2024-04-04T22:55:43.155618Z","end":"2024-04-04T22:55:43.601948Z","steps":["trace[341973304] 'process raft request'  (duration: 444.370786ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.603214Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:43.155607Z","time spent":"447.553664ms","remote":"127.0.0.1:51180","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4221,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" mod_revision:574 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" value_size:4155 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" > >"}
	{"level":"info","ts":"2024-04-04T23:05:19.554681Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":845}
	{"level":"info","ts":"2024-04-04T23:05:19.57241Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":845,"took":"16.472745ms","hash":833007506,"current-db-size-bytes":2609152,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2609152,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-04-04T23:05:19.572534Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":833007506,"revision":845,"compact-revision":-1}
	
	
	==> kernel <==
	 23:08:52 up 14 min,  0 users,  load average: 0.44, 0.24, 0.16
	Linux no-preload-024416 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] <==
	I0404 23:03:22.375272       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:05:21.376286       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:05:21.376727       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0404 23:05:22.377473       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:05:22.377533       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:05:22.377543       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:05:22.377658       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:05:22.377781       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:05:22.378834       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:06:22.378595       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:06:22.378662       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:06:22.378669       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:06:22.379154       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:06:22.379385       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:06:22.380768       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:08:22.379617       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:08:22.379986       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:08:22.380022       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:08:22.380906       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:08:22.381002       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:08:22.381112       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] <==
	I0404 23:03:07.006334       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:03:36.520989       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:03:37.020620       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:04:06.527486       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:04:07.029686       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:04:36.533255       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:04:37.037671       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:05:06.539631       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:05:07.046767       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:05:36.545776       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:05:37.056190       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:06:06.551254       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:06:07.065242       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0404 23:06:24.882318       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="424.182µs"
	E0404 23:06:36.557895       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:06:37.073690       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0404 23:06:39.878605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="63.995µs"
	E0404 23:07:06.563650       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:07:07.085522       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:07:36.568771       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:07:37.093654       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:08:06.574143       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:08:07.104347       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:08:36.579772       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:08:37.113252       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] <==
	I0404 22:55:24.416373       1 server_linux.go:69] "Using iptables proxy"
	I0404 22:55:24.463648       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.77"]
	I0404 22:55:24.551436       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0404 22:55:24.551555       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 22:55:24.551588       1 server_linux.go:165] "Using iptables Proxier"
	I0404 22:55:24.554803       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 22:55:24.555520       1 server.go:872] "Version info" version="v1.30.0-rc.0"
	I0404 22:55:24.555572       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:55:24.558269       1 config.go:192] "Starting service config controller"
	I0404 22:55:24.558335       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0404 22:55:24.558379       1 config.go:101] "Starting endpoint slice config controller"
	I0404 22:55:24.558402       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0404 22:55:24.559691       1 config.go:319] "Starting node config controller"
	I0404 22:55:24.562750       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0404 22:55:24.658915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0404 22:55:24.659031       1 shared_informer.go:320] Caches are synced for service config
	I0404 22:55:24.663225       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] <==
	I0404 22:55:19.201712       1 serving.go:380] Generated self-signed cert in-memory
	W0404 22:55:21.282147       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0404 22:55:21.282233       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 22:55:21.282264       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0404 22:55:21.282287       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0404 22:55:21.343623       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.0"
	I0404 22:55:21.343709       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:55:21.347207       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0404 22:55:21.347263       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:55:21.348254       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0404 22:55:21.348357       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0404 22:55:21.380226       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0404 22:55:21.380331       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0404 22:55:22.948041       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 04 23:06:16 no-preload-024416 kubelet[1351]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:06:16 no-preload-024416 kubelet[1351]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:06:16 no-preload-024416 kubelet[1351]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:06:16 no-preload-024416 kubelet[1351]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:06:24 no-preload-024416 kubelet[1351]: E0404 23:06:24.862295    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:06:39 no-preload-024416 kubelet[1351]: E0404 23:06:39.860535    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:06:54 no-preload-024416 kubelet[1351]: E0404 23:06:54.865399    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:07:09 no-preload-024416 kubelet[1351]: E0404 23:07:09.860276    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:07:16 no-preload-024416 kubelet[1351]: E0404 23:07:16.902345    1351 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 04 23:07:16 no-preload-024416 kubelet[1351]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:07:16 no-preload-024416 kubelet[1351]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:07:16 no-preload-024416 kubelet[1351]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:07:16 no-preload-024416 kubelet[1351]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:07:22 no-preload-024416 kubelet[1351]: E0404 23:07:22.862361    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:07:35 no-preload-024416 kubelet[1351]: E0404 23:07:35.861517    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:07:50 no-preload-024416 kubelet[1351]: E0404 23:07:50.860961    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:08:04 no-preload-024416 kubelet[1351]: E0404 23:08:04.863507    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:08:16 no-preload-024416 kubelet[1351]: E0404 23:08:16.900193    1351 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 04 23:08:16 no-preload-024416 kubelet[1351]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:08:16 no-preload-024416 kubelet[1351]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:08:16 no-preload-024416 kubelet[1351]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:08:16 no-preload-024416 kubelet[1351]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:08:18 no-preload-024416 kubelet[1351]: E0404 23:08:18.861746    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:08:31 no-preload-024416 kubelet[1351]: E0404 23:08:31.861859    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:08:43 no-preload-024416 kubelet[1351]: E0404 23:08:43.861448    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	
	
	==> storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] <==
	I0404 22:55:55.242192       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0404 22:55:55.259233       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0404 22:55:55.259437       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0404 22:56:12.661912       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0404 22:56:12.662159       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-024416_4b0a16fa-d6d8-4348-8727-792f8e1c636c!
	I0404 22:56:12.666334       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d6efdd0-78db-41ac-b46f-f7e4d5ce265a", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-024416_4b0a16fa-d6d8-4348-8727-792f8e1c636c became leader
	I0404 22:56:12.763188       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-024416_4b0a16fa-d6d8-4348-8727-792f8e1c636c!
	
	
	==> storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] <==
	I0404 22:55:24.079479       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0404 22:55:54.088392       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-024416 -n no-preload-024416
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-024416 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5q4ff
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-024416 describe pod metrics-server-569cc877fc-5q4ff
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-024416 describe pod metrics-server-569cc877fc-5q4ff: exit status 1 (62.581315ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5q4ff" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-024416 describe pod metrics-server-569cc877fc-5q4ff: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0404 23:01:24.216691   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 23:01:30.506610   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 23:02:08.250227   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 23:02:47.262355   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 23:02:53.552270   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 23:03:09.142529   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-04 23:09:58.235887362 +0000 UTC m=+6054.098873069
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
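The wait that failed here can be reproduced by hand against the same profile; a minimal sketch using plain kubectl, assuming the profile's context is present in the kubeconfig referenced earlier in this report (context name, namespace, label selector, and the 9m timeout are taken from the log lines above; the commands are illustrative and not part of the captured test output):

	# list whatever dashboard pods exist in the namespace the test polls
	kubectl --context default-k8s-diff-port-952083 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# block until a matching pod reports Ready, mirroring the test's 9m0s wait
	kubectl --context default-k8s-diff-port-952083 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m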
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-952083 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-952083 logs -n 25: (2.289447229s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-063570 sudo cat                              | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo find                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo crio                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-063570                                       | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-443615 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | disable-driver-mounts-443615                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:46 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-952083  | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-143118            | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-024416             | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-343162        | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-952083       | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 23:00 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-143118                 | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-024416                  | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-343162             | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 22:50:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 22:50:56.470398   65393 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:50:56.470518   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470527   65393 out.go:304] Setting ErrFile to fd 2...
	I0404 22:50:56.470531   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470702   65393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:50:56.471207   65393 out.go:298] Setting JSON to false
	I0404 22:50:56.472107   65393 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5602,"bootTime":1712265455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:50:56.472206   65393 start.go:139] virtualization: kvm guest
	I0404 22:50:56.474636   65393 out.go:177] * [old-k8s-version-343162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:50:56.477086   65393 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:50:56.477116   65393 notify.go:220] Checking for updates...
	I0404 22:50:56.479875   65393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:50:56.481381   65393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:50:56.482598   65393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:50:56.483896   65393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:50:56.485276   65393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:50:56.487030   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:50:56.487414   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.487471   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.502537   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44449
	I0404 22:50:56.502977   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.503568   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.503592   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.503915   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.504146   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.506219   65393 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0404 22:50:56.507513   65393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:50:56.507838   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.507872   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.522700   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0404 22:50:56.523202   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.523675   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.523702   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.524012   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.524213   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.560806   65393 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 22:50:56.562468   65393 start.go:297] selected driver: kvm2
	I0404 22:50:56.562485   65393 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.562593   65393 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:50:56.563382   65393 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.563463   65393 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:50:56.579804   65393 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:50:56.580216   65393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:50:56.580311   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:50:56.580325   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:50:56.580361   65393 start.go:340] cluster config:
	{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.580508   65393 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.582753   65393 out.go:177] * Starting "old-k8s-version-343162" primary control-plane node in "old-k8s-version-343162" cluster
	I0404 22:50:55.940353   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:50:56.584381   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:50:56.584440   65393 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0404 22:50:56.584451   65393 cache.go:56] Caching tarball of preloaded images
	I0404 22:50:56.584585   65393 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 22:50:56.584616   65393 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0404 22:50:56.584754   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:50:56.585041   65393 start.go:360] acquireMachinesLock for old-k8s-version-343162: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:50:59.012433   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:05.092380   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:08.164456   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:14.244461   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:17.316477   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:23.396389   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:26.468349   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:32.548445   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:35.620422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:41.700386   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:44.772391   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:50.852426   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:53.924460   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:00.004442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:03.076397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:09.156418   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:12.228432   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:18.308468   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:21.380441   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:27.460405   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:30.532470   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:36.612450   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:39.684473   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:45.764397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:48.836435   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:54.916369   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:57.992400   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:04.068476   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:07.140457   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:13.220492   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:16.292379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:22.372422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:25.444444   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:31.524421   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:34.596378   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:40.676452   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:43.748475   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:49.828481   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:52.900427   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:58.980363   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:02.052442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:08.132379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:11.204438   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:14.209023   64902 start.go:364] duration metric: took 4m27.708792236s to acquireMachinesLock for "embed-certs-143118"
	I0404 22:54:14.209073   64902 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:14.209081   64902 fix.go:54] fixHost starting: 
	I0404 22:54:14.209454   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:14.209493   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:14.224780   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0404 22:54:14.225202   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:14.225774   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:14.225796   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:14.226086   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:14.226268   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:14.226381   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:14.227982   64902 fix.go:112] recreateIfNeeded on embed-certs-143118: state=Stopped err=<nil>
	I0404 22:54:14.228030   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	W0404 22:54:14.228195   64902 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:14.229974   64902 out.go:177] * Restarting existing kvm2 VM for "embed-certs-143118" ...
	I0404 22:54:14.231465   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Start
	I0404 22:54:14.231647   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring networks are active...
	I0404 22:54:14.232454   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network default is active
	I0404 22:54:14.232844   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network mk-embed-certs-143118 is active
	I0404 22:54:14.233268   64902 main.go:141] libmachine: (embed-certs-143118) Getting domain xml...
	I0404 22:54:14.234059   64902 main.go:141] libmachine: (embed-certs-143118) Creating domain...
	I0404 22:54:15.443570   64902 main.go:141] libmachine: (embed-certs-143118) Waiting to get IP...
	I0404 22:54:15.444501   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.444881   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.444974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.444857   65866 retry.go:31] will retry after 235.384261ms: waiting for machine to come up
	I0404 22:54:15.682336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.682843   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.682889   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.682802   65866 retry.go:31] will retry after 298.217645ms: waiting for machine to come up
	I0404 22:54:15.982346   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.982793   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.982822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.982737   65866 retry.go:31] will retry after 388.227781ms: waiting for machine to come up
	I0404 22:54:16.372259   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.372758   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.372783   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.372709   65866 retry.go:31] will retry after 455.494549ms: waiting for machine to come up
	I0404 22:54:14.206346   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:14.206380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206696   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:54:14.206723   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206981   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:54:14.208882   64791 machine.go:97] duration metric: took 4m37.40831033s to provisionDockerMachine
	I0404 22:54:14.208934   64791 fix.go:56] duration metric: took 4m37.430672573s for fixHost
	I0404 22:54:14.208943   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 4m37.430710876s
	W0404 22:54:14.208977   64791 start.go:713] error starting host: provision: host is not running
	W0404 22:54:14.209074   64791 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0404 22:54:14.209085   64791 start.go:728] Will try again in 5 seconds ...
	I0404 22:54:16.829381   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.829863   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.829895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.829758   65866 retry.go:31] will retry after 476.920945ms: waiting for machine to come up
	I0404 22:54:17.308558   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:17.309233   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:17.309260   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:17.309155   65866 retry.go:31] will retry after 723.322819ms: waiting for machine to come up
	I0404 22:54:18.034138   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.034605   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.034633   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.034559   65866 retry.go:31] will retry after 858.492179ms: waiting for machine to come up
	I0404 22:54:18.894172   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.894590   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.894621   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.894541   65866 retry.go:31] will retry after 1.243998506s: waiting for machine to come up
	I0404 22:54:20.140387   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:20.140872   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:20.140906   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:20.140777   65866 retry.go:31] will retry after 1.245446322s: waiting for machine to come up
	I0404 22:54:21.388210   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:21.388626   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:21.388653   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:21.388588   65866 retry.go:31] will retry after 2.315520772s: waiting for machine to come up
	I0404 22:54:19.210605   64791 start.go:360] acquireMachinesLock for default-k8s-diff-port-952083: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:54:23.707374   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:23.707938   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:23.707971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:23.707895   65866 retry.go:31] will retry after 1.925131112s: waiting for machine to come up
	I0404 22:54:25.635778   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:25.636251   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:25.636281   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:25.636192   65866 retry.go:31] will retry after 3.393560306s: waiting for machine to come up
	I0404 22:54:29.033804   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:29.034284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:29.034311   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:29.034223   65866 retry.go:31] will retry after 4.387913748s: waiting for machine to come up
	I0404 22:54:34.941563   65047 start.go:364] duration metric: took 4m31.283417073s to acquireMachinesLock for "no-preload-024416"
	I0404 22:54:34.941648   65047 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:34.941658   65047 fix.go:54] fixHost starting: 
	I0404 22:54:34.942144   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:34.942187   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:34.959447   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0404 22:54:34.959938   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:34.960536   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:54:34.960563   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:34.960909   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:34.961137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:34.961305   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:54:34.963158   65047 fix.go:112] recreateIfNeeded on no-preload-024416: state=Stopped err=<nil>
	I0404 22:54:34.963183   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	W0404 22:54:34.963366   65047 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:34.965339   65047 out.go:177] * Restarting existing kvm2 VM for "no-preload-024416" ...
	I0404 22:54:33.427303   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427873   64902 main.go:141] libmachine: (embed-certs-143118) Found IP for machine: 192.168.61.137
	I0404 22:54:33.427903   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has current primary IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427916   64902 main.go:141] libmachine: (embed-certs-143118) Reserving static IP address...
	I0404 22:54:33.428384   64902 main.go:141] libmachine: (embed-certs-143118) Reserved static IP address: 192.168.61.137
	I0404 22:54:33.428436   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.428454   64902 main.go:141] libmachine: (embed-certs-143118) Waiting for SSH to be available...
	I0404 22:54:33.428483   64902 main.go:141] libmachine: (embed-certs-143118) DBG | skip adding static IP to network mk-embed-certs-143118 - found existing host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"}
	I0404 22:54:33.428496   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Getting to WaitForSSH function...
	I0404 22:54:33.430650   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.430971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.430999   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.431167   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH client type: external
	I0404 22:54:33.431187   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa (-rw-------)
	I0404 22:54:33.431213   64902 main.go:141] libmachine: (embed-certs-143118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:33.431231   64902 main.go:141] libmachine: (embed-certs-143118) DBG | About to run SSH command:
	I0404 22:54:33.431247   64902 main.go:141] libmachine: (embed-certs-143118) DBG | exit 0
	I0404 22:54:33.556780   64902 main.go:141] libmachine: (embed-certs-143118) DBG | SSH cmd err, output: <nil>: 
	I0404 22:54:33.557184   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetConfigRaw
	I0404 22:54:33.557821   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.560786   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561122   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.561156   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561482   64902 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/config.json ...
	I0404 22:54:33.561714   64902 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:33.561738   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:33.562027   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.564494   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564777   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.564800   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564996   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.565203   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565364   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565525   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.565660   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.565840   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.565851   64902 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:33.672777   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:33.672807   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673051   64902 buildroot.go:166] provisioning hostname "embed-certs-143118"
	I0404 22:54:33.673072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673221   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.675895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676276   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.676302   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676441   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.676631   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676783   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676920   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.677099   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.677291   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.677306   64902 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-143118 && echo "embed-certs-143118" | sudo tee /etc/hostname
	I0404 22:54:33.800210   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-143118
	
	I0404 22:54:33.800234   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.802902   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.803313   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803464   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.803699   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.803917   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.804130   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.804307   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.804477   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.804493   64902 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-143118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-143118/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-143118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:33.922754   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:33.922787   64902 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:33.922810   64902 buildroot.go:174] setting up certificates
	I0404 22:54:33.922821   64902 provision.go:84] configureAuth start
	I0404 22:54:33.922829   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.923168   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.926018   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926349   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.926376   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926536   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.928860   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929202   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.929225   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929405   64902 provision.go:143] copyHostCerts
	I0404 22:54:33.929464   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:33.929474   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:33.929538   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:33.929623   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:33.929631   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:33.929654   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:33.929705   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:33.929712   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:33.929733   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:33.929781   64902 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.embed-certs-143118 san=[127.0.0.1 192.168.61.137 embed-certs-143118 localhost minikube]
	I0404 22:54:34.248318   64902 provision.go:177] copyRemoteCerts
	I0404 22:54:34.248368   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:34.248392   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.251549   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.251969   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.252005   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.252162   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.252415   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.252592   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.252806   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.340287   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:54:34.367083   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:34.392473   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:34.418850   64902 provision.go:87] duration metric: took 496.019024ms to configureAuth
	I0404 22:54:34.418876   64902 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:34.419050   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:34.419118   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.422414   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422794   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.422822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422989   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.423196   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423395   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423548   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.423706   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.423903   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.423926   64902 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:34.698283   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:34.698315   64902 machine.go:97] duration metric: took 1.136579802s to provisionDockerMachine
	I0404 22:54:34.698330   64902 start.go:293] postStartSetup for "embed-certs-143118" (driver="kvm2")
	I0404 22:54:34.698343   64902 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:34.698362   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.698738   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:34.698767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.701491   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.701869   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.701899   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.702062   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.702269   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.702410   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.702580   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.787376   64902 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:34.791940   64902 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:34.791969   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:34.792032   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:34.792113   64902 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:34.792228   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:34.801800   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:34.828107   64902 start.go:296] duration metric: took 129.762377ms for postStartSetup
	I0404 22:54:34.828175   64902 fix.go:56] duration metric: took 20.61909326s for fixHost
	I0404 22:54:34.828200   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.831336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.831914   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.831939   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.832211   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.832443   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832725   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832911   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.833074   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.833242   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.833253   64902 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:54:34.941376   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271274.913625729
	
	I0404 22:54:34.941401   64902 fix.go:216] guest clock: 1712271274.913625729
	I0404 22:54:34.941409   64902 fix.go:229] Guest: 2024-04-04 22:54:34.913625729 +0000 UTC Remote: 2024-04-04 22:54:34.828180786 +0000 UTC m=+288.480480037 (delta=85.444943ms)
	I0404 22:54:34.941435   64902 fix.go:200] guest clock delta is within tolerance: 85.444943ms
	I0404 22:54:34.941442   64902 start.go:83] releasing machines lock for "embed-certs-143118", held for 20.732383788s
	I0404 22:54:34.941472   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.941770   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:34.944521   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.944943   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.944973   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.945137   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945761   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945994   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.946079   64902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:34.946123   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.946244   64902 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:34.946271   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.948974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949059   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949433   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949468   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949503   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949597   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949832   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949893   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950007   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950094   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950167   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950239   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.950311   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:35.061691   64902 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:35.068941   64902 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:35.222979   64902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:35.230776   64902 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:35.230861   64902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:35.250962   64902 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:35.250993   64902 start.go:494] detecting cgroup driver to use...
	I0404 22:54:35.251078   64902 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:35.270027   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:35.286582   64902 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:35.286642   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:35.305465   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:35.323473   64902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:35.448815   64902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:35.601864   64902 docker.go:233] disabling docker service ...
	I0404 22:54:35.602013   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:35.621617   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:35.638755   64902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:35.784051   64902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:35.909763   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:35.925820   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:35.946492   64902 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:35.946555   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.958517   64902 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:35.958584   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.971470   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.983820   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.996000   64902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:36.009730   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.022318   64902 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.042685   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.054710   64902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:36.066952   64902 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:36.067023   64902 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:36.083843   64902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:54:36.096564   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:36.228943   64902 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:54:36.376198   64902 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:36.376276   64902 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:36.382097   64902 start.go:562] Will wait 60s for crictl version
	I0404 22:54:36.382176   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:54:36.386651   64902 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:36.425845   64902 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:36.425931   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.456397   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.491436   64902 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:54:34.967133   65047 main.go:141] libmachine: (no-preload-024416) Calling .Start
	I0404 22:54:34.967354   65047 main.go:141] libmachine: (no-preload-024416) Ensuring networks are active...
	I0404 22:54:34.968261   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network default is active
	I0404 22:54:34.968700   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network mk-no-preload-024416 is active
	I0404 22:54:34.969252   65047 main.go:141] libmachine: (no-preload-024416) Getting domain xml...
	I0404 22:54:34.969956   65047 main.go:141] libmachine: (no-preload-024416) Creating domain...
	I0404 22:54:36.226773   65047 main.go:141] libmachine: (no-preload-024416) Waiting to get IP...
	I0404 22:54:36.227926   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.228451   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.228535   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.228427   65995 retry.go:31] will retry after 247.657069ms: waiting for machine to come up
	I0404 22:54:36.477997   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.478543   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.478576   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.478489   65995 retry.go:31] will retry after 249.517341ms: waiting for machine to come up
	I0404 22:54:36.730176   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.730636   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.730665   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.730591   65995 retry.go:31] will retry after 476.552832ms: waiting for machine to come up
	I0404 22:54:37.209208   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.209677   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.209709   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.209661   65995 retry.go:31] will retry after 435.547363ms: waiting for machine to come up
	I0404 22:54:37.647412   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.647985   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.648014   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.647932   65995 retry.go:31] will retry after 506.589084ms: waiting for machine to come up
	I0404 22:54:38.155673   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:38.156207   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:38.156255   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:38.156171   65995 retry.go:31] will retry after 890.290504ms: waiting for machine to come up
	I0404 22:54:36.493249   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:36.496475   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.496905   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:36.496937   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.497232   64902 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:36.502139   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:36.517272   64902 kubeadm.go:877] updating cluster {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:36.517432   64902 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:54:36.517497   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:36.564031   64902 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:54:36.564108   64902 ssh_runner.go:195] Run: which lz4
	I0404 22:54:36.568713   64902 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:54:36.573960   64902 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:54:36.574006   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:54:38.239579   64902 crio.go:462] duration metric: took 1.670908361s to copy over tarball
	I0404 22:54:38.239657   64902 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:54:40.665443   64902 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.425757745s)
	I0404 22:54:40.665487   64902 crio.go:469] duration metric: took 2.425877834s to extract the tarball
	I0404 22:54:40.665496   64902 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:54:40.703772   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:40.754221   64902 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:54:40.754248   64902 cache_images.go:84] Images are preloaded, skipping loading
	I0404 22:54:40.754256   64902 kubeadm.go:928] updating node { 192.168.61.137 8443 v1.29.3 crio true true} ...
	I0404 22:54:40.754363   64902 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-143118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:54:40.754464   64902 ssh_runner.go:195] Run: crio config
	I0404 22:54:40.811854   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:40.811888   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:40.811906   64902 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:54:40.811936   64902 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-143118 NodeName:embed-certs-143118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:54:40.812177   64902 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-143118"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:54:40.812313   64902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:54:40.824209   64902 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:54:40.824295   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:54:40.835585   64902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0404 22:54:40.854777   64902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:54:40.875571   64902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0404 22:54:40.896639   64902 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0404 22:54:40.901267   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:40.916843   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:41.050188   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:41.070935   64902 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118 for IP: 192.168.61.137
	I0404 22:54:41.070957   64902 certs.go:194] generating shared ca certs ...
	I0404 22:54:41.070972   64902 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:41.071132   64902 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:54:41.071191   64902 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:54:41.071205   64902 certs.go:256] generating profile certs ...
	I0404 22:54:41.071322   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/client.key
	I0404 22:54:41.071399   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key.e7f8ac5b
	I0404 22:54:41.071435   64902 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key
	I0404 22:54:41.071553   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:54:41.071585   64902 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:54:41.071596   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:54:41.071624   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:54:41.071649   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:54:41.071670   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:54:41.071725   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:41.072445   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:54:41.108370   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:54:41.142072   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:54:41.179263   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:54:41.226769   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0404 22:54:41.273570   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:54:41.306526   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:54:41.336764   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:54:41.367053   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:54:39.048106   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.048587   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.048616   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.048540   65995 retry.go:31] will retry after 946.742057ms: waiting for machine to come up
	I0404 22:54:39.997241   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.997737   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.997774   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.997679   65995 retry.go:31] will retry after 1.053079472s: waiting for machine to come up
	I0404 22:54:41.052284   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:41.052796   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:41.052835   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:41.052762   65995 retry.go:31] will retry after 1.551456209s: waiting for machine to come up
	I0404 22:54:42.606789   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:42.607297   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:42.607335   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:42.607228   65995 retry.go:31] will retry after 2.022953695s: waiting for machine to come up
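The retry.go lines from process 65047 show libmachine waiting for the freshly restarted no-preload-024416 VM: it repeatedly asks libvirt for the domain's DHCP lease and backs off with a growing delay (946ms, 1.05s, 1.55s, 2.02s, ...) until an IP appears. A minimal sketch of that wait pattern in Go; lookupIP and the attempt count are illustrative stand-ins, not minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookupIP with a growing, slightly jittered delay, in the
// spirit of the retry.go lines above. lookupIP stands in for the real
// DHCP-lease query and is purely illustrative.
func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 4))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay += delay / 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 5)
	fmt.Println(err)
}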
	I0404 22:54:41.395449   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:54:41.549507   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:54:41.578394   64902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:54:41.597960   64902 ssh_runner.go:195] Run: openssl version
	I0404 22:54:41.604217   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:54:41.618340   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623568   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623626   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.630781   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:54:41.644981   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:54:41.657992   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663188   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663242   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.670992   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:54:41.683812   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:54:41.696848   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702441   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702499   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.709270   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
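The test -s/ln -fs and test -L/ln -fs pairs above install 125542.pem, minikubeCA.pem, and 12554.pem into the guest's trust store under OpenSSL's subject-hash naming scheme, where /etc/ssl/certs/<hash>.0 must point at the PEM file. A minimal Go sketch of those two steps as run here; the helper name is illustrative and the code is not minikube's own:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash mirrors the log above: ask openssl for the cert's subject hash,
// then point /etc/ssl/certs/<hash>.0 at the PEM so the system trust store finds it.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs: drop any stale link first, then create the new one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/12554.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}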
	I0404 22:54:41.722509   64902 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:54:41.727688   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:54:41.734456   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:54:41.741006   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:54:41.748106   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:54:41.754559   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:54:41.761107   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
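Each `openssl x509 -checkend 86400` run above asks whether the named control-plane certificate is still valid for at least another 24 hours before the existing certificates are reused. An equivalent check written directly against crypto/x509; the file path is copied from the log, the helper name is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within the given window, matching `openssl x509 -checkend`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}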
	I0404 22:54:41.768416   64902 kubeadm.go:391] StartCluster: {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:54:41.768497   64902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:54:41.768557   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.815664   64902 cri.go:89] found id: ""
	I0404 22:54:41.815737   64902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:54:41.829255   64902 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:54:41.829283   64902 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:54:41.829290   64902 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:54:41.829333   64902 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:54:41.842482   64902 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:54:41.843868   64902 kubeconfig.go:125] found "embed-certs-143118" server: "https://192.168.61.137:8443"
	I0404 22:54:41.846707   64902 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:54:41.858505   64902 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.137
	I0404 22:54:41.858544   64902 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:54:41.858558   64902 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:54:41.858616   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.905608   64902 cri.go:89] found id: ""
	I0404 22:54:41.905686   64902 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:54:41.928336   64902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:54:41.939893   64902 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:54:41.939913   64902 kubeadm.go:156] found existing configuration files:
	
	I0404 22:54:41.939967   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:54:41.950100   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:54:41.950159   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:54:41.961297   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:54:41.972267   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:54:41.972350   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:54:41.983395   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:54:41.994162   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:54:41.994256   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:54:42.005118   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:54:42.015514   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:54:42.015583   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:54:42.027013   64902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:54:42.037495   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:42.144612   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.301722   64902 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.157071112s)
	I0404 22:54:43.301751   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.527881   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.621621   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.708816   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:54:43.708949   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.209626   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.709138   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.734681   64902 api_server.go:72] duration metric: took 1.025863443s to wait for apiserver process to appear ...
	I0404 22:54:44.734715   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:54:44.734742   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.735296   64902 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
	I0404 22:54:45.235123   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.632681   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:44.633163   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:44.633192   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:44.633124   65995 retry.go:31] will retry after 2.627056472s: waiting for machine to come up
	I0404 22:54:47.262212   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:47.262588   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:47.262618   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:47.262539   65995 retry.go:31] will retry after 3.141452547s: waiting for machine to come up
	I0404 22:54:47.737311   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.737339   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:47.737356   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:47.782485   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.782519   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:48.235046   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.240034   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.240068   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:48.735331   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.745583   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.745624   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:49.235513   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:49.239826   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:54:49.248617   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:54:49.248647   64902 api_server.go:131] duration metric: took 4.51392228s to wait for apiserver health ...
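The healthz exchange above is the usual post-restart wait: poll https://192.168.61.137:8443/healthz, treating connection-refused errors, anonymous 403s, and 500s from unfinished poststart hooks as "not yet", until the endpoint returns 200 ok. A stripped-down sketch of that loop; the timeout, sleep interval, and skip-verify client are illustrative choices, not minikube's exact settings:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200 "ok"
// or the deadline passes. Anonymous requests may see 403 or 500 first, as in
// the log above, so any non-200 answer just means "try again".
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.61.137:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}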
	I0404 22:54:49.248655   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:49.248662   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:49.250501   64902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:54:49.252206   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:54:49.271104   64902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
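The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced on the previous lines; its exact contents are not printed in the log. A representative bridge conflist, written the same way from Go, might look roughly like this (every field value here is illustrative, not taken from the run):

package main

import "os"

// A representative bridge CNI config; minikube's actual 1-k8s.conflist may
// differ in names and ranges, which the log does not show.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// 0644 matches typical CNI config file permissions.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}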
	I0404 22:54:49.297257   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:54:49.309692   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:54:49.309726   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:54:49.309733   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:54:49.309741   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:54:49.309746   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:54:49.309751   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:54:49.309756   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:54:49.309764   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:54:49.309768   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:54:49.309775   64902 system_pods.go:74] duration metric: took 12.491664ms to wait for pod list to return data ...
	I0404 22:54:49.309782   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:54:49.315864   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:54:49.315890   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:54:49.315901   64902 node_conditions.go:105] duration metric: took 6.114458ms to run NodePressure ...
	I0404 22:54:49.315926   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:49.610849   64902 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615336   64902 kubeadm.go:733] kubelet initialised
	I0404 22:54:49.615356   64902 kubeadm.go:734] duration metric: took 4.484086ms waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615364   64902 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:49.621224   64902 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.626963   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.626988   64902 pod_ready.go:81] duration metric: took 5.735991ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.626996   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.627002   64902 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.633596   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633620   64902 pod_ready.go:81] duration metric: took 6.610566ms for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.633628   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633634   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.639137   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639168   64902 pod_ready.go:81] duration metric: took 5.51865ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.639177   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639183   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.701443   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701475   64902 pod_ready.go:81] duration metric: took 62.283656ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.701488   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701497   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.100946   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100974   64902 pod_ready.go:81] duration metric: took 399.467513ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.100984   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100990   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.501078   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501115   64902 pod_ready.go:81] duration metric: took 400.110991ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.501126   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501135   64902 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.902773   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902804   64902 pod_ready.go:81] duration metric: took 401.658155ms for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.902817   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902826   64902 pod_ready.go:38] duration metric: took 1.287454227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
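The pod_ready wait above lists the system-critical pods and checks each one's Ready condition, skipping pods whose node still reports Ready=False. A compact client-go sketch of that per-pod check; the kubeconfig path and pod name are taken from the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16143-5297/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-9qh9s", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podReady(pod))
}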
	I0404 22:54:50.902842   64902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:54:50.915531   64902 ops.go:34] apiserver oom_adj: -16
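Reading /proc/<pid>/oom_adj for the kube-apiserver (here -16) confirms the control-plane process is strongly protected from the kernel OOM killer. The same read, sketched in Go; the pgrep invocation is simplified relative to the log's -xnf form:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`:
// find the apiserver's PID, then read its oom_adj value from procfs.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	// pgrep may print several PIDs, one per line; take the first.
	pid := strings.Split(strings.TrimSpace(string(out)), "\n")[0]
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj)
}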
	I0404 22:54:50.915553   64902 kubeadm.go:591] duration metric: took 9.086257382s to restartPrimaryControlPlane
	I0404 22:54:50.915562   64902 kubeadm.go:393] duration metric: took 9.147152807s to StartCluster
	I0404 22:54:50.915580   64902 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.915663   64902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:54:50.917278   64902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.917558   64902 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:54:50.919536   64902 out.go:177] * Verifying Kubernetes components...
	I0404 22:54:50.917632   64902 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:54:50.917801   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:50.921079   64902 addons.go:69] Setting default-storageclass=true in profile "embed-certs-143118"
	I0404 22:54:50.921090   64902 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-143118"
	I0404 22:54:50.921093   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:50.921114   64902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-143118"
	I0404 22:54:50.921138   64902 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-143118"
	W0404 22:54:50.921153   64902 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:54:50.921079   64902 addons.go:69] Setting metrics-server=true in profile "embed-certs-143118"
	I0404 22:54:50.921195   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921217   64902 addons.go:234] Setting addon metrics-server=true in "embed-certs-143118"
	W0404 22:54:50.921232   64902 addons.go:243] addon metrics-server should already be in state true
	I0404 22:54:50.921254   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921467   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921507   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921683   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921709   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921753   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921781   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.937216   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36235
	I0404 22:54:50.937299   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0404 22:54:50.937762   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.937802   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.938291   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938313   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938321   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938333   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938661   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.938670   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.939249   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939271   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939300   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.939311   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.941042   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0404 22:54:50.941525   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.942072   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.942100   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.942499   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.942729   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.946461   64902 addons.go:234] Setting addon default-storageclass=true in "embed-certs-143118"
	W0404 22:54:50.946482   64902 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:54:50.946510   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.946882   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.946915   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.955518   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0404 22:54:50.955557   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0404 22:54:50.955998   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956052   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956571   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956600   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956720   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956747   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956986   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957070   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957190   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.957232   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.959071   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.959241   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.961349   64902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:50.963021   64902 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:54:50.964576   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:54:50.964596   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:54:50.963102   64902 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:50.964618   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.964632   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:54:50.964657   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.965167   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0404 22:54:50.965591   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.966202   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.966227   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.966554   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.967093   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.967129   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.968169   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968490   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968564   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.968589   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968740   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.968871   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969009   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969078   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.969104   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.969280   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.969336   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.969576   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969732   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969883   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.985964   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0404 22:54:50.986398   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.986935   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.986961   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.987432   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.987635   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.989705   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.989956   64902 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:50.989970   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:54:50.989984   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.993058   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993505   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.993540   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993703   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.993916   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.994072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.994255   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:51.108711   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:51.128024   64902 node_ready.go:35] waiting up to 6m0s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:51.190119   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:51.199620   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:54:51.199648   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:54:51.223007   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:51.235174   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:54:51.235203   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:54:51.260985   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:51.261010   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:54:51.283555   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:52.285657   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095501296s)
	I0404 22:54:52.285706   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.062659931s)
	I0404 22:54:52.285731   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285744   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285755   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.285767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286091   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286108   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286118   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286128   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286218   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.286282   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286294   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286306   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286328   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.288015   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288022   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288013   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288092   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288106   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.288149   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.293937   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.293962   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.294199   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.294243   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.294254   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320063   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.036455618s)
	I0404 22:54:52.320206   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320223   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320525   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320546   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320559   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320568   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320821   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320853   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320860   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320870   64902 addons.go:470] Verifying addon metrics-server=true in "embed-certs-143118"
	I0404 22:54:52.323920   64902 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0404 22:54:50.405817   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:50.406204   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:50.406240   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:50.406163   65995 retry.go:31] will retry after 3.600637009s: waiting for machine to come up
	I0404 22:54:55.277382   65393 start.go:364] duration metric: took 3m58.692301923s to acquireMachinesLock for "old-k8s-version-343162"
	I0404 22:54:55.277485   65393 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:55.277502   65393 fix.go:54] fixHost starting: 
	I0404 22:54:55.277878   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:55.277920   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:55.297525   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0404 22:54:55.297963   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:55.298433   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:54:55.298456   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:55.298792   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:55.298962   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:54:55.299129   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetState
	I0404 22:54:55.300682   65393 fix.go:112] recreateIfNeeded on old-k8s-version-343162: state=Stopped err=<nil>
	I0404 22:54:55.300717   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	W0404 22:54:55.300937   65393 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:55.303925   65393 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-343162" ...
	I0404 22:54:52.325280   64902 addons.go:505] duration metric: took 1.407646741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0404 22:54:53.132199   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.136881   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.305412   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .Start
	I0404 22:54:55.305607   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring networks are active...
	I0404 22:54:55.306344   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network default is active
	I0404 22:54:55.306786   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network mk-old-k8s-version-343162 is active
	I0404 22:54:55.307281   65393 main.go:141] libmachine: (old-k8s-version-343162) Getting domain xml...
	I0404 22:54:55.308086   65393 main.go:141] libmachine: (old-k8s-version-343162) Creating domain...
	I0404 22:54:54.010930   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011401   65047 main.go:141] libmachine: (no-preload-024416) Found IP for machine: 192.168.50.77
	I0404 22:54:54.011426   65047 main.go:141] libmachine: (no-preload-024416) Reserving static IP address...
	I0404 22:54:54.011444   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has current primary IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011871   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.011896   65047 main.go:141] libmachine: (no-preload-024416) DBG | skip adding static IP to network mk-no-preload-024416 - found existing host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"}
	I0404 22:54:54.011924   65047 main.go:141] libmachine: (no-preload-024416) Reserved static IP address: 192.168.50.77
	I0404 22:54:54.011942   65047 main.go:141] libmachine: (no-preload-024416) Waiting for SSH to be available...
	I0404 22:54:54.011956   65047 main.go:141] libmachine: (no-preload-024416) DBG | Getting to WaitForSSH function...
	I0404 22:54:54.014714   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015164   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.015190   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015357   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH client type: external
	I0404 22:54:54.015401   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa (-rw-------)
	I0404 22:54:54.015469   65047 main.go:141] libmachine: (no-preload-024416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:54.015495   65047 main.go:141] libmachine: (no-preload-024416) DBG | About to run SSH command:
	I0404 22:54:54.015513   65047 main.go:141] libmachine: (no-preload-024416) DBG | exit 0
	I0404 22:54:54.140293   65047 main.go:141] libmachine: (no-preload-024416) DBG | SSH cmd err, output: <nil>: 
	I0404 22:54:54.140713   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetConfigRaw
	I0404 22:54:54.141290   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.144062   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144446   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.144476   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144752   65047 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/config.json ...
	I0404 22:54:54.144967   65047 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:54.144984   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:54.145210   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.147699   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148016   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.148046   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148244   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.148430   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148571   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.148878   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.149071   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.149082   65047 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:54.256984   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:54.257021   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257324   65047 buildroot.go:166] provisioning hostname "no-preload-024416"
	I0404 22:54:54.257354   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257530   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.259995   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260339   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.260369   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.260709   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260862   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260986   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.261172   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.261348   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.261360   65047 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-024416 && echo "no-preload-024416" | sudo tee /etc/hostname
	I0404 22:54:54.383376   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-024416
	
	I0404 22:54:54.383401   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.386482   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.386993   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.387035   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.387179   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.387368   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387705   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.387954   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.388225   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.388258   65047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-024416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-024416/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-024416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:54.506307   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:54.506341   65047 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:54.506358   65047 buildroot.go:174] setting up certificates
	I0404 22:54:54.506366   65047 provision.go:84] configureAuth start
	I0404 22:54:54.506375   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.506653   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.509737   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510146   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.510177   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.512864   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513212   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.513244   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513339   65047 provision.go:143] copyHostCerts
	I0404 22:54:54.513391   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:54.513410   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:54.513464   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:54.513547   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:54.513559   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:54.513579   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:54.513642   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:54.513653   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:54.513672   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:54.513728   65047 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.no-preload-024416 san=[127.0.0.1 192.168.50.77 localhost minikube no-preload-024416]
	I0404 22:54:54.574205   65047 provision.go:177] copyRemoteCerts
	I0404 22:54:54.574261   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:54.574292   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.577012   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577382   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.577417   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577576   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.577750   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.577913   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.578009   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:54.662795   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:54:54.688507   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:54.714322   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:54.743237   65047 provision.go:87] duration metric: took 236.859201ms to configureAuth
	I0404 22:54:54.743277   65047 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:54.743504   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:54:54.743573   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.746762   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747249   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.747277   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747454   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.747656   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747791   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747912   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.748073   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.748321   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.748342   65047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:55.023000   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:55.023025   65047 machine.go:97] duration metric: took 878.045187ms to provisionDockerMachine
	I0404 22:54:55.023038   65047 start.go:293] postStartSetup for "no-preload-024416" (driver="kvm2")
	I0404 22:54:55.023052   65047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:55.023072   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.023459   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:55.023493   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.026491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.026914   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.026938   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.027162   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.027350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.027490   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.027627   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.111497   65047 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:55.116152   65047 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:55.116178   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:55.116260   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:55.116331   65047 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:55.116412   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:55.126233   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:55.159228   65047 start.go:296] duration metric: took 136.175727ms for postStartSetup
	I0404 22:54:55.159267   65047 fix.go:56] duration metric: took 20.217610648s for fixHost
	I0404 22:54:55.159286   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.162009   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162439   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.162469   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162665   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.162941   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163119   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163353   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.163531   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:55.163697   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:55.163707   65047 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:54:55.277216   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271295.257019839
	
	I0404 22:54:55.277241   65047 fix.go:216] guest clock: 1712271295.257019839
	I0404 22:54:55.277254   65047 fix.go:229] Guest: 2024-04-04 22:54:55.257019839 +0000 UTC Remote: 2024-04-04 22:54:55.159270151 +0000 UTC m=+291.659154910 (delta=97.749688ms)
	I0404 22:54:55.277281   65047 fix.go:200] guest clock delta is within tolerance: 97.749688ms
	I0404 22:54:55.277308   65047 start.go:83] releasing machines lock for "no-preload-024416", held for 20.335673292s
	I0404 22:54:55.277345   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.277650   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:55.280507   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.280968   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.280998   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.281137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281654   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281845   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281930   65047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:55.281967   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.282068   65047 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:55.282085   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.284662   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.284975   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285007   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285034   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285217   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285379   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.285469   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285542   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.285677   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285722   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.286037   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.286197   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.286343   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.404469   65047 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:55.411104   65047 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:55.570700   65047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:55.577807   65047 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:55.577879   65047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:55.595095   65047 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:55.595115   65047 start.go:494] detecting cgroup driver to use...
	I0404 22:54:55.595180   65047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:55.611801   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:55.626402   65047 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:55.626460   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:55.642719   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:55.663140   65047 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:55.784155   65047 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:55.933709   65047 docker.go:233] disabling docker service ...
	I0404 22:54:55.933782   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:55.951743   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:55.966349   65047 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:56.129621   65047 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:56.295246   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:56.312815   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:56.334486   65047 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:56.334554   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.351987   65047 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:56.352067   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.368799   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.390239   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.409407   65047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:56.421785   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.434016   65047 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.457813   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.474509   65047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:56.489126   65047 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:56.489186   65047 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:56.509461   65047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:54:56.523048   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:56.708034   65047 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:54:56.860854   65047 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:56.860931   65047 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:56.867310   65047 start.go:562] Will wait 60s for crictl version
	I0404 22:54:56.867380   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:56.871431   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:56.911015   65047 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:56.911096   65047 ssh_runner.go:195] Run: crio --version
	I0404 22:54:56.942313   65047 ssh_runner.go:195] Run: crio --version
	I0404 22:54:56.985247   65047 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0404 22:54:56.986628   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:56.990377   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.990775   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:56.990805   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.991069   65047 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:56.996831   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:57.013429   65047 kubeadm.go:877] updating cluster {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:57.013580   65047 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 22:54:57.013630   65047 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:57.066460   65047 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0404 22:54:57.066491   65047 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:54:57.066546   65047 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.066569   65047 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.066598   65047 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.066677   65047 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.066674   65047 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.066703   65047 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.066755   65047 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0404 22:54:57.066974   65047 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068510   65047 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068579   65047 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.068590   65047 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.068603   65047 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.068648   65047 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.068516   65047 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.068666   65047 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0404 22:54:57.068727   65047 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.282811   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.319459   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.329131   65047 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0404 22:54:57.329170   65047 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.329216   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.334176   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.395259   65047 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0404 22:54:57.395307   65047 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.395333   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.395348   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.406668   65047 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0404 22:54:57.406719   65047 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.406769   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.428655   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0404 22:54:57.430830   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.434683   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.439273   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439326   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.439406   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439428   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.565067   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690361   65047 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0404 22:54:57.690402   65047 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.690456   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690478   65047 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0404 22:54:57.690509   65047 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.690556   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690563   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690608   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0404 22:54:57.690635   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0404 22:54:57.690653   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690669   65047 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0404 22:54:57.690702   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690737   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:57.690702   65047 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690667   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690772   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.702038   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.703294   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0404 22:54:57.703314   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.861311   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.632801   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:58.632291   64902 node_ready.go:49] node "embed-certs-143118" has status "Ready":"True"
	I0404 22:54:58.632328   64902 node_ready.go:38] duration metric: took 7.504264997s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:58.632341   64902 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:58.640851   64902 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652075   64902 pod_ready.go:92] pod "coredns-76f75df574-9qh9s" in "kube-system" namespace has status "Ready":"True"
	I0404 22:54:58.652100   64902 pod_ready.go:81] duration metric: took 11.215741ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652114   64902 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:00.659219   64902 pod_ready.go:102] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"False"
	I0404 22:54:56.653772   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting to get IP...
	I0404 22:54:56.654708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.655215   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.655284   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.655205   66200 retry.go:31] will retry after 274.074964ms: waiting for machine to come up
	I0404 22:54:56.930932   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.931386   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.931451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.931345   66200 retry.go:31] will retry after 260.84968ms: waiting for machine to come up
	I0404 22:54:57.193886   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.194346   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.194368   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.194299   66200 retry.go:31] will retry after 334.40821ms: waiting for machine to come up
	I0404 22:54:57.530137   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.530694   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.530742   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.530631   66200 retry.go:31] will retry after 514.498594ms: waiting for machine to come up
	I0404 22:54:58.046394   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.046985   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.047025   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.046905   66200 retry.go:31] will retry after 466.899368ms: waiting for machine to come up
	I0404 22:54:58.515516   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.516090   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.516224   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.516149   66200 retry.go:31] will retry after 670.74835ms: waiting for machine to come up
	I0404 22:54:59.187992   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:59.188549   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:59.188579   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:59.188496   66200 retry.go:31] will retry after 849.489739ms: waiting for machine to come up
	I0404 22:55:00.039689   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:00.040255   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:00.040290   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:00.040190   66200 retry.go:31] will retry after 1.450679545s: waiting for machine to come up
	I0404 22:54:59.998625   65047 ssh_runner.go:235] Completed: which crictl: (2.307831694s)
	I0404 22:54:59.998689   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.307893929s)
	I0404 22:54:59.998766   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0404 22:54:59.998771   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0: (2.296702121s)
	I0404 22:54:59.998810   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0: (2.295483217s)
	I0404 22:54:59.998704   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:59.998853   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998720   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.307994873s)
	I0404 22:54:59.998819   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.998930   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0404 22:54:59.998951   65047 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998854   65047 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.137514139s)
	I0404 22:54:59.998961   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998990   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998992   65047 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0404 22:54:59.998998   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.999027   65047 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:59.999070   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:55:00.009724   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:00.009945   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
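The cache_images/crio lines above transfer an image tarball only when it is missing on the node ("copy: skipping ... (exists)") and then load it into CRI-O's image store with podman. A rough sketch of that skip-if-exists load step, not minikube's implementation (the command is run locally here for illustration; minikube executes it over SSH):

    package sketch

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadCachedImage loads a pre-transferred image tarball into the CRI-O
    // image store, mirroring the logged
    // "sudo podman load -i /var/lib/minikube/images/..." commands.
    func loadCachedImage(tarball string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("tarball not on disk, copy it from the cache first: %w", err)
        }
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
        }
        return nil
    }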
	I0404 22:55:01.660284   64902 pod_ready.go:92] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.660348   64902 pod_ready.go:81] duration metric: took 3.008175771s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.660362   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666661   64902 pod_ready.go:92] pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.666687   64902 pod_ready.go:81] duration metric: took 6.316138ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666701   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672891   64902 pod_ready.go:92] pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.672915   64902 pod_ready.go:81] duration metric: took 6.205783ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672928   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679237   64902 pod_ready.go:92] pod "kube-proxy-psst7" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.679268   64902 pod_ready.go:81] duration metric: took 6.332124ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679280   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833788   64902 pod_ready.go:92] pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.833817   64902 pod_ready.go:81] duration metric: took 154.528833ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833831   64902 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:03.841405   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:05.842970   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
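The pod_ready.go lines above poll each control-plane pod until its Ready condition flips to True (metrics-server is still reporting False at this point). A minimal client-go sketch of the same check, assuming a clientset has already been configured for the cluster:

    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the pod until its PodReady condition is True or the
    // timeout expires, like the `has status "Ready":"True"` checks in the log.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s did not become Ready within %v", ns, name, timeout)
    }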
	I0404 22:55:01.492870   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:01.493414   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:01.493440   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:01.493366   66200 retry.go:31] will retry after 1.844779665s: waiting for machine to come up
	I0404 22:55:03.340065   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:03.340708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:03.340739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:03.340650   66200 retry.go:31] will retry after 1.954275124s: waiting for machine to come up
	I0404 22:55:05.297408   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:05.297938   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:05.297986   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:05.297876   66200 retry.go:31] will retry after 1.771664796s: waiting for machine to come up
	I0404 22:55:04.067188   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.068174109s)
	I0404 22:55:04.067285   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0404 22:55:04.067284   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.057537661s)
	I0404 22:55:04.067312   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067322   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0404 22:55:04.067244   65047 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (4.068231608s)
	I0404 22:55:04.067203   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (4.068350332s)
	I0404 22:55:04.067371   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0404 22:55:04.067380   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067395   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0404 22:55:04.067414   65047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:04.067486   65047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:06.758753   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.691348313s)
	I0404 22:55:06.758790   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0404 22:55:06.758816   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:06.758812   65047 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.691301898s)
	I0404 22:55:06.758845   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0404 22:55:06.758850   65047 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.691417107s)
	I0404 22:55:06.758876   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0404 22:55:06.758884   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:07.843038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:10.342614   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:07.072194   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:07.072586   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:07.072614   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:07.072532   66200 retry.go:31] will retry after 2.503470191s: waiting for machine to come up
	I0404 22:55:09.577312   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:09.577837   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:09.577877   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:09.577785   66200 retry.go:31] will retry after 3.900843515s: waiting for machine to come up
	I0404 22:55:09.231002   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.472089065s)
	I0404 22:55:09.231033   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0404 22:55:09.231064   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:09.231114   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:10.787933   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.556787731s)
	I0404 22:55:10.788017   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0404 22:55:10.788087   65047 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:10.788163   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:12.668739   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880546401s)
	I0404 22:55:12.668772   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0404 22:55:12.668806   65047 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:12.668877   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:15.073701   64791 start.go:364] duration metric: took 55.863036918s to acquireMachinesLock for "default-k8s-diff-port-952083"
	I0404 22:55:15.073768   64791 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:55:15.073780   64791 fix.go:54] fixHost starting: 
	I0404 22:55:15.074227   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:15.074264   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:15.094449   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0404 22:55:15.094905   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:15.095422   64791 main.go:141] libmachine: Using API Version  1
	I0404 22:55:15.095446   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:15.095809   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:15.096067   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:15.096216   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 22:55:15.097752   64791 fix.go:112] recreateIfNeeded on default-k8s-diff-port-952083: state=Stopped err=<nil>
	I0404 22:55:15.097779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	W0404 22:55:15.097988   64791 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:55:15.100278   64791 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-952083" ...
	I0404 22:55:12.840463   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:14.841903   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:13.482797   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483264   65393 main.go:141] libmachine: (old-k8s-version-343162) Found IP for machine: 192.168.39.247
	I0404 22:55:13.483286   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has current primary IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483293   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserving static IP address...
	I0404 22:55:13.483713   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.483753   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserved static IP address: 192.168.39.247
	I0404 22:55:13.483776   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | skip adding static IP to network mk-old-k8s-version-343162 - found existing host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"}
	I0404 22:55:13.483790   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting for SSH to be available...
	I0404 22:55:13.483810   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Getting to WaitForSSH function...
	I0404 22:55:13.485889   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486260   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.486303   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486379   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH client type: external
	I0404 22:55:13.486411   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa (-rw-------)
	I0404 22:55:13.486451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:13.486465   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | About to run SSH command:
	I0404 22:55:13.486493   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | exit 0
	I0404 22:55:13.616680   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:13.617056   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetConfigRaw
	I0404 22:55:13.617819   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:13.620441   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.620959   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.620987   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.621259   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:55:13.621490   65393 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:13.621512   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:13.621715   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.624353   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.624739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.624769   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.625019   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.625218   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625392   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625540   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.625730   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.625987   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.626005   65393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:13.745162   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:13.745198   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745533   65393 buildroot.go:166] provisioning hostname "old-k8s-version-343162"
	I0404 22:55:13.745558   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745773   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.748881   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749270   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.749304   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749393   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.749619   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749787   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.750304   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.750599   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.750624   65393 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-343162 && echo "old-k8s-version-343162" | sudo tee /etc/hostname
	I0404 22:55:13.887895   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-343162
	
	I0404 22:55:13.887942   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.891149   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891606   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.891657   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891985   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.892226   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892464   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892629   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.892839   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.893110   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.893139   65393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-343162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-343162/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-343162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:14.026612   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:55:14.026651   65393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:14.026677   65393 buildroot.go:174] setting up certificates
	I0404 22:55:14.026691   65393 provision.go:84] configureAuth start
	I0404 22:55:14.026702   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:14.026996   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:14.030366   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.030831   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.030869   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.031171   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.033684   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034074   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.034115   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034266   65393 provision.go:143] copyHostCerts
	I0404 22:55:14.034319   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:14.034331   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:14.034384   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:14.034471   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:14.034479   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:14.034498   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:14.034559   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:14.034567   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:14.034585   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:14.034633   65393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-343162 san=[127.0.0.1 192.168.39.247 localhost minikube old-k8s-version-343162]
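The provision.go line above issues a server certificate signed by the local minikube CA with the listed SANs (loopback, the VM IP, and the machine names). A hedged crypto/x509 sketch of that step; the template field choices here are illustrative, not minikube's exact certificate layout:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a CA-signed server certificate whose SANs match the
    // provision.go log line: localhost, minikube, the machine name, plus the
    // loopback and VM IP addresses passed in ips.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, machine string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins." + machine}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", machine},
            IPAddresses:  ips, // e.g. 127.0.0.1 and 192.168.39.247 in the log
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }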
	I0404 22:55:14.305753   65393 provision.go:177] copyRemoteCerts
	I0404 22:55:14.305805   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:14.305830   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.308454   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308772   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.308812   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308940   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.309140   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.309315   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.309461   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.399449   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:14.431583   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0404 22:55:14.459757   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:55:14.489959   65393 provision.go:87] duration metric: took 463.256669ms to configureAuth
	I0404 22:55:14.489999   65393 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:14.490222   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:55:14.490314   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.493316   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493721   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.493746   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493935   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.494154   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494339   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494495   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.494669   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.494915   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.494940   65393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:14.813278   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:14.813306   65393 machine.go:97] duration metric: took 1.19180289s to provisionDockerMachine
	I0404 22:55:14.813318   65393 start.go:293] postStartSetup for "old-k8s-version-343162" (driver="kvm2")
	I0404 22:55:14.813328   65393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:14.813365   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:14.813712   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:14.813739   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.817108   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817543   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.817577   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817770   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.817970   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.818192   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.818371   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.908922   65393 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:14.914161   65393 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:14.914190   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:14.914279   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:14.914388   65393 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:14.914522   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:14.924829   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:14.952058   65393 start.go:296] duration metric: took 138.725667ms for postStartSetup
	I0404 22:55:14.952103   65393 fix.go:56] duration metric: took 19.674601379s for fixHost
	I0404 22:55:14.952151   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.954833   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955233   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.955273   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955501   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.955712   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.955866   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.956022   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.956189   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.956355   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.956367   65393 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0404 22:55:15.073532   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271315.019545661
	
	I0404 22:55:15.073560   65393 fix.go:216] guest clock: 1712271315.019545661
	I0404 22:55:15.073569   65393 fix.go:229] Guest: 2024-04-04 22:55:15.019545661 +0000 UTC Remote: 2024-04-04 22:55:14.952108062 +0000 UTC m=+258.528860140 (delta=67.437599ms)
	I0404 22:55:15.073595   65393 fix.go:200] guest clock delta is within tolerance: 67.437599ms
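The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host, and accept the machine because the 67ms delta is within tolerance. A small sketch of that comparison:

    package sketch

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockWithinTolerance parses the guest's `date +%s.%N` output and reports
    // the absolute skew against the host clock plus whether it falls inside
    // the allowed tolerance.
    func clockWithinTolerance(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance, nil
    }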
	I0404 22:55:15.073603   65393 start.go:83] releasing machines lock for "old-k8s-version-343162", held for 19.79616835s
	I0404 22:55:15.073645   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.073982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:15.077001   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077416   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.077464   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077684   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078280   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078473   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078552   65393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:15.078592   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.078687   65393 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:15.078714   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.081320   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081704   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.081724   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081752   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081995   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082190   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082212   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.082249   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.082366   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082519   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082557   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.082639   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082782   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082912   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.165958   65393 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:15.206476   65393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:15.356625   65393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:15.364893   65393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:15.364973   65393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:15.387634   65393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:15.387663   65393 start.go:494] detecting cgroup driver to use...
	I0404 22:55:15.387728   65393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:15.408455   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:15.427753   65393 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:15.427812   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:15.443713   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:15.462657   65393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:15.611611   65393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:15.799003   65393 docker.go:233] disabling docker service ...
	I0404 22:55:15.799058   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:15.819716   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:15.838428   65393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:15.998060   65393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:16.143821   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:16.162284   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:16.185030   65393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0404 22:55:16.185124   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.201354   65393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:16.201422   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.214191   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.232995   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.246872   65393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:16.260730   65393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:16.272532   65393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:16.272601   65393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:16.289516   65393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:16.306546   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:16.469991   65393 ssh_runner.go:195] Run: sudo systemctl restart crio
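The block above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod"), enables IP forwarding, and restarts CRI-O. A sketch of that sequence using the sed expressions taken from the log; run is a hypothetical stand-in for minikube's ssh_runner:

    package sketch

    import "fmt"

    // configureCRIO applies the same edits the log shows: point CRI-O at the
    // pause image, switch it to the cgroupfs cgroup manager, pin conmon to the
    // pod cgroup, enable ip_forward, then reload systemd and restart crio.
    func configureCRIO(run func(cmd string) error, pauseImage, cgroupManager string) error {
        steps := []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupManager),
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
            "sudo systemctl daemon-reload",
            "sudo systemctl restart crio",
        }
        for _, cmd := range steps {
            if err := run(cmd); err != nil {
                return fmt.Errorf("%q failed: %w", cmd, err)
            }
        }
        return nil
    }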
	I0404 22:55:15.101770   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Start
	I0404 22:55:15.101942   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring networks are active...
	I0404 22:55:15.102800   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network default is active
	I0404 22:55:15.103162   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network mk-default-k8s-diff-port-952083 is active
	I0404 22:55:15.103685   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Getting domain xml...
	I0404 22:55:15.104598   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Creating domain...
	I0404 22:55:16.462772   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting to get IP...
	I0404 22:55:16.463782   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.464303   66369 retry.go:31] will retry after 209.546798ms: waiting for machine to come up
	I0404 22:55:16.618533   65393 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:16.618609   65393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:16.623677   65393 start.go:562] Will wait 60s for crictl version
	I0404 22:55:16.623740   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:16.627698   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:16.667097   65393 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:16.667195   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.700780   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.739287   65393 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0404 22:55:13.614563   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0404 22:55:13.614614   65047 cache_images.go:123] Successfully loaded all cached images
	I0404 22:55:13.614620   65047 cache_images.go:92] duration metric: took 16.548112387s to LoadCachedImages
	I0404 22:55:13.614638   65047 kubeadm.go:928] updating node { 192.168.50.77 8443 v1.30.0-rc.0 crio true true} ...
	I0404 22:55:13.614766   65047 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-024416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:13.614859   65047 ssh_runner.go:195] Run: crio config
	I0404 22:55:13.670321   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:13.670352   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:13.670368   65047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:13.670397   65047 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.77 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-024416 NodeName:no-preload-024416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:13.670593   65047 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-024416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:13.670688   65047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0404 22:55:13.683863   65047 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:13.683944   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:13.694995   65047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0404 22:55:13.717964   65047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0404 22:55:13.738088   65047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0404 22:55:13.762413   65047 ssh_runner.go:195] Run: grep 192.168.50.77	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:13.767294   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:13.783621   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:13.925851   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
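The lines above write the rendered kubelet unit, its 10-kubeadm.conf drop-in, and the kubeadm.yaml shown earlier to the node, then reload systemd and start the kubelet. A condensed sketch of that bring-up; writeFile and run are hypothetical stand-ins for minikube's scp/exec helpers:

    package sketch

    // startKubelet writes the generated systemd unit, kubelet drop-in and
    // kubeadm config, then reloads systemd and starts the kubelet, mirroring
    // the daemon-reload / start kubelet commands in the log.
    func startKubelet(run func(cmd string) error, writeFile func(path, content string) error,
        unit, dropin, kubeadmYAML string) error {
        files := map[string]string{
            "/lib/systemd/system/kubelet.service":                    unit,
            "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf":  dropin,
            "/var/tmp/minikube/kubeadm.yaml.new":                     kubeadmYAML,
        }
        for path, content := range files {
            if err := writeFile(path, content); err != nil {
                return err
            }
        }
        for _, cmd := range []string{"sudo systemctl daemon-reload", "sudo systemctl start kubelet"} {
            if err := run(cmd); err != nil {
                return err
            }
        }
        return nil
    }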
	I0404 22:55:13.946583   65047 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416 for IP: 192.168.50.77
	I0404 22:55:13.946612   65047 certs.go:194] generating shared ca certs ...
	I0404 22:55:13.946632   65047 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:13.946873   65047 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:13.946932   65047 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:13.946946   65047 certs.go:256] generating profile certs ...
	I0404 22:55:13.947038   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/client.key
	I0404 22:55:13.947126   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key.8f9148e7
	I0404 22:55:13.947183   65047 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key
	I0404 22:55:13.947338   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:13.947388   65047 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:13.947402   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:13.947442   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:13.947480   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:13.947501   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:13.947538   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:13.948161   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:13.987518   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:14.023442   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:14.054901   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:14.096855   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0404 22:55:14.140589   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:14.171957   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:14.200219   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:55:14.228332   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:14.255955   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:14.281822   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:14.310385   65047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:14.330943   65047 ssh_runner.go:195] Run: openssl version
	I0404 22:55:14.337412   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:14.350470   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355351   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355405   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.361799   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:14.374526   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:14.388775   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394578   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394643   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.401888   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:14.415796   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:14.431180   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437443   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437498   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.444439   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:55:14.458554   65047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:14.463877   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:14.471842   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:14.478423   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:14.485289   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:14.491991   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:14.499145   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
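The run of "openssl x509 -checkend 86400" commands above asks, for each control-plane certificate, whether it will still be valid 24 hours from now; a failing check is what would trigger regeneration. A minimal Go equivalent of that check is sketched below; it is illustrative only (minikube shells out to openssl here), and the path used is just one of the certificates from the log.

// checkend.go - sketch of "openssl x509 -checkend 86400": report whether a
// PEM-encoded certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to -checkend: does the validity window end before now+d?
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}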
	I0404 22:55:14.505734   65047 kubeadm.go:391] StartCluster: {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:14.505842   65047 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:14.505916   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.552731   65047 cri.go:89] found id: ""
	I0404 22:55:14.552818   65047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:14.564111   65047 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:14.564146   65047 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:14.564153   65047 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:14.564206   65047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:14.577068   65047 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:14.578074   65047 kubeconfig.go:125] found "no-preload-024416" server: "https://192.168.50.77:8443"
	I0404 22:55:14.579998   65047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:14.592375   65047 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.77
	I0404 22:55:14.592404   65047 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:14.592415   65047 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:14.592459   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.638880   65047 cri.go:89] found id: ""
	I0404 22:55:14.638970   65047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:14.663010   65047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:14.675814   65047 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:14.675853   65047 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:14.675900   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:14.688308   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:14.688375   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:14.702880   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:14.716656   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:14.716728   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:14.728605   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.739218   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:14.739283   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.750108   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:14.761607   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:14.761671   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:14.774056   65047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:14.787939   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:14.909162   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.334914   65047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.425719775s)
	I0404 22:55:16.334945   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.609889   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.686278   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.774460   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:16.774563   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.274803   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.775665   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.274707   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.298277   65047 api_server.go:72] duration metric: took 1.523817264s to wait for apiserver process to appear ...
	I0404 22:55:18.298307   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:18.298329   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:18.298961   65047 api_server.go:269] stopped: https://192.168.50.77:8443/healthz: Get "https://192.168.50.77:8443/healthz": dial tcp 192.168.50.77:8443: connect: connection refused
	I0404 22:55:16.842824   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:18.845888   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:21.344260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:16.740949   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:16.744057   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744491   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:16.744533   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744749   65393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:16.750820   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:16.769289   65393 kubeadm.go:877] updating cluster {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:16.769467   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:55:16.769531   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:16.827000   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:16.827077   65393 ssh_runner.go:195] Run: which lz4
	I0404 22:55:16.833494   65393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0404 22:55:16.839898   65393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:16.839972   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0404 22:55:18.896288   65393 crio.go:462] duration metric: took 2.062838778s to copy over tarball
	I0404 22:55:18.896369   65393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:16.676096   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676944   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.676858   66369 retry.go:31] will retry after 272.178949ms: waiting for machine to come up
	I0404 22:55:16.951171   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951771   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.951666   66369 retry.go:31] will retry after 296.205822ms: waiting for machine to come up
	I0404 22:55:17.249449   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250006   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250039   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.249901   66369 retry.go:31] will retry after 395.504604ms: waiting for machine to come up
	I0404 22:55:17.647678   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648471   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.648337   66369 retry.go:31] will retry after 465.589308ms: waiting for machine to come up
	I0404 22:55:18.116437   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117692   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117740   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.117645   66369 retry.go:31] will retry after 763.715105ms: waiting for machine to come up
	I0404 22:55:18.883468   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884148   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884214   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.884004   66369 retry.go:31] will retry after 1.098461705s: waiting for machine to come up
	I0404 22:55:19.984522   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985080   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985111   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:19.985029   66369 retry.go:31] will retry after 1.20224728s: waiting for machine to come up
	I0404 22:55:21.188813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:21.189284   66369 retry.go:31] will retry after 1.828752424s: waiting for machine to come up
	I0404 22:55:18.798849   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.231226   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.231258   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.231274   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.260339   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.260380   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.298635   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.330231   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.330264   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.798461   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.808690   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:21.808726   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.299332   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.305181   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.305213   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.798717   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.818393   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.818438   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.298741   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.305958   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.305991   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.798813   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.804261   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.804296   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.298487   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.304891   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:24.304928   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.798523   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.804212   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:55:24.811974   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:55:24.811999   65047 api_server.go:131] duration metric: took 6.513685326s to wait for apiserver health ...
	I0404 22:55:24.812008   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:24.812014   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:24.814353   65047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
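The healthz sequence above is a plain polling loop: the probe is anonymous (hence the initial 403 "system:anonymous" replies), the apiserver then answers 500 while individual poststarthooks such as rbac/bootstrap-roles are still completing, and finally 200 ok. Below is a minimal Go sketch of such a wait loop; it is illustrative only, not minikube's api_server.go, and the URL and timeouts are assumptions taken from the log.

// healthzwait.go - sketch: poll the apiserver /healthz endpoint until it
// returns 200 OK or the overall deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous, unverified probe - matching the 403/500 replies seen in the
		// log before the apiserver finishes bootstrapping.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.77:8443/healthz", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver is healthy")
}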
	I0404 22:55:23.841622   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:25.841739   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:22.459272   65393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562873515s)
	I0404 22:55:22.459298   65393 crio.go:469] duration metric: took 3.56298059s to extract the tarball
	I0404 22:55:22.459305   65393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:22.506283   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:22.545033   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:22.545060   65393 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:55:22.545126   65393 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.545137   65393 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.545173   65393 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.545196   65393 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.545238   65393 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.545302   65393 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.545208   65393 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0404 22:55:22.545446   65393 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.546976   65393 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0404 22:55:22.547023   65393 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.547031   65393 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.546980   65393 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.547034   65393 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.547073   65393 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.547003   65393 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.547012   65393 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.771721   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0404 22:55:22.774797   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.775291   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.776362   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.785066   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.785465   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.787095   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.935936   65393 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0404 22:55:22.935986   65393 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0404 22:55:22.936032   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.978411   65393 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0404 22:55:22.978460   65393 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.978521   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.988007   65393 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0404 22:55:22.988053   65393 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.988095   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001263   65393 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0404 22:55:23.001296   65393 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0404 22:55:23.001325   65393 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.001356   65393 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.001373   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001400   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007053   65393 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0404 22:55:23.007107   65393 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.007111   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0404 22:55:23.007115   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:23.007135   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007071   65393 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0404 22:55:23.007139   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:23.007170   65393 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.007207   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.009469   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.010460   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.144982   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.145036   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0404 22:55:23.149847   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0404 22:55:23.149886   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0404 22:55:23.149931   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0404 22:55:23.386832   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.542789   65393 cache_images.go:92] duration metric: took 997.708569ms to LoadCachedImages
	W0404 22:55:23.542901   65393 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
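	The "needs transfer" decisions above come from probing the container runtime with `sudo podman image inspect --format {{.Id}}` for each required image and removing stale entries with crictl. A minimal, standalone sketch of that presence probe, using the same podman command as the log (the helper name and the hard-coded image list are illustrative, not minikube code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagePresent runs `sudo podman image inspect --format {{.Id}} <image>`;
	// a non-zero exit means the image is not in the container runtime.
	func imagePresent(image string) (bool, error) {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				return false, nil // non-zero exit: image missing, needs transfer
			}
			return false, err // podman unavailable, permission error, etc.
		}
		return strings.TrimSpace(string(out)) != "", nil
	}

	func main() {
		for _, img := range []string{
			"registry.k8s.io/pause:3.2",
			"registry.k8s.io/etcd:3.4.13-0",
		} {
			ok, err := imagePresent(img)
			fmt.Println(img, "present:", ok, "err:", err)
		}
	}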
	I0404 22:55:23.542928   65393 kubeadm.go:928] updating node { 192.168.39.247 8443 v1.20.0 crio true true} ...
	I0404 22:55:23.543082   65393 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-343162 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:23.543199   65393 ssh_runner.go:195] Run: crio config
	I0404 22:55:23.604999   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:55:23.605023   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:23.605035   65393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:23.605057   65393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-343162 NodeName:old-k8s-version-343162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0404 22:55:23.605247   65393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-343162"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
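	The block above is the kubeadm v1beta2 configuration minikube generated for Kubernetes v1.20.0 on the crio socket. As a rough illustration of how such a fragment can be rendered from a handful of parameters, here is a standalone Go text/template sketch; the struct, template, and values are illustrative only and are not minikube's actual types or template:

	package main

	import (
		"os"
		"text/template"
	)

	// Parameters assumed from the log: advertise address, API port, node name, CRI socket.
	type kubeadmParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initCfg))
		p := kubeadmParams{
			AdvertiseAddress: "192.168.39.247",
			BindPort:         8443,
			NodeName:         "old-k8s-version-343162",
			CRISocket:        "/var/run/crio/crio.sock",
		}
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}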
	
	I0404 22:55:23.605324   65393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0404 22:55:23.618943   65393 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:23.619021   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:23.631449   65393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0404 22:55:23.653570   65393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:23.674444   65393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0404 22:55:23.696627   65393 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:23.701248   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
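	The bash one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts by dropping any stale entry for that name and appending the current mapping. A small sketch of the same idempotent update (helper name and scratch file path are examples, not minikube code; the real target in the log is /etc/hosts):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHost removes any line ending in "\t<name>" and appends "<ip>\t<name>".
	func pinHost(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale mapping for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Use a scratch copy when experimenting instead of /etc/hosts.
		tmp := "/tmp/hosts.example"
		_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644)
		fmt.Println(pinHost(tmp, "192.168.39.247", "control-plane.minikube.internal"))
	}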
	I0404 22:55:23.715979   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:23.872589   65393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:23.893271   65393 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162 for IP: 192.168.39.247
	I0404 22:55:23.893302   65393 certs.go:194] generating shared ca certs ...
	I0404 22:55:23.893323   65393 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:23.893508   65393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:23.893573   65393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:23.893591   65393 certs.go:256] generating profile certs ...
	I0404 22:55:23.893703   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.key
	I0404 22:55:23.893789   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key.184368d7
	I0404 22:55:23.893848   65393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key
	I0404 22:55:23.894013   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:23.894060   65393 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:23.894075   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:23.894119   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:23.894152   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:23.894184   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:23.894283   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:23.895146   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:23.930844   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:23.970129   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:24.012084   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:24.061026   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0404 22:55:24.099215   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 22:55:24.144924   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:24.179027   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:24.214467   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:24.261693   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:24.294049   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:24.330228   65393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:24.357217   65393 ssh_runner.go:195] Run: openssl version
	I0404 22:55:24.364368   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:24.377630   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384429   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384493   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.392806   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:24.409360   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:24.425575   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431294   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431359   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.438465   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:24.452037   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:24.467913   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473901   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473964   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.482161   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:55:24.498553   65393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:24.505506   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:24.513143   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:24.520059   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:24.527197   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:24.534384   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:24.541499   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
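	The openssl runs above check each control-plane certificate with `-checkend 86400`, i.e. whether it will expire within the next 24 hours; a zero exit means the certificate is still valid for that window. A minimal sketch of that check, shelling out to the same command (certificate paths copied from the log; the helper itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// expiresWithinDay returns true when openssl reports the certificate
	// will expire within the next 86400 seconds (non-zero exit status).
	func expiresWithinDay(certPath string) (bool, error) {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
		if err := cmd.Run(); err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				return true, nil // non-zero exit: expires within the window
			}
			return false, err // openssl missing, unreadable file, etc.
		}
		return false, nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			soon, err := expiresWithinDay(p)
			fmt.Println(p, "expires within 24h:", soon, "err:", err)
		}
	}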
	I0404 22:55:24.548056   65393 kubeadm.go:391] StartCluster: {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:24.548157   65393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:24.548208   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.592650   65393 cri.go:89] found id: ""
	I0404 22:55:24.592732   65393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:24.605071   65393 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:24.605101   65393 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:24.605107   65393 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:24.605167   65393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:24.616615   65393 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:24.617680   65393 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-343162" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:24.618355   65393 kubeconfig.go:62] /home/jenkins/minikube-integration/16143-5297/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-343162" cluster setting kubeconfig missing "old-k8s-version-343162" context setting]
	I0404 22:55:24.619375   65393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:24.621522   65393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:24.632596   65393 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.247
	I0404 22:55:24.632645   65393 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:24.632660   65393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:24.632717   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.683167   65393 cri.go:89] found id: ""
	I0404 22:55:24.683241   65393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:24.703840   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:24.717504   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:24.717524   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:24.717579   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:24.729845   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:24.729916   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:24.741544   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:24.755004   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:24.755081   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:24.769007   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.782305   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:24.782371   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.792627   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:24.802737   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:24.802806   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:24.814766   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:24.825422   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.006245   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.377950   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.371665115s)
	I0404 22:55:26.377988   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:24.816240   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:24.831576   65047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:55:24.861844   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:24.873557   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:24.873598   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:24.873616   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:24.873627   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:24.873642   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:24.873650   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:24.873661   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:24.873669   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:24.873677   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:24.873690   65047 system_pods.go:74] duration metric: took 11.825989ms to wait for pod list to return data ...
	I0404 22:55:24.873703   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:24.879876   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:24.879907   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:24.879922   65047 node_conditions.go:105] duration metric: took 6.209498ms to run NodePressure ...
	I0404 22:55:24.879960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.214317   65047 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:25.220926   65047 kubeadm.go:733] kubelet initialised
	I0404 22:55:25.220998   65047 kubeadm.go:734] duration metric: took 6.651112ms waiting for restarted kubelet to initialise ...
	I0404 22:55:25.221013   65047 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:25.228414   65047 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.245164   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245223   65047 pod_ready.go:81] duration metric: took 16.78145ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.245238   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245271   65047 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.258160   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258192   65047 pod_ready.go:81] duration metric: took 12.905596ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.258205   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258215   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.265054   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265079   65047 pod_ready.go:81] duration metric: took 6.856081ms for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.265090   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265098   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.276639   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276664   65047 pod_ready.go:81] duration metric: took 11.554875ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.276675   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276684   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.665602   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665631   65047 pod_ready.go:81] duration metric: took 388.935811ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.665640   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665646   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.066199   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066229   65047 pod_ready.go:81] duration metric: took 400.576415ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.066242   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066252   65047 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.466681   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466715   65047 pod_ready.go:81] duration metric: took 400.447851ms for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.466728   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466738   65047 pod_ready.go:38] duration metric: took 1.245712492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
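	The pod_ready loop above lists the system-critical pods by label and keeps waiting while their node reports "Ready":"False". A rough client-go sketch of that kind of readiness poll, assuming a reachable kubeconfig at the default location (the label selector and timeout mirror the log, but this helper is illustrative and not minikube's implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s, up to 4m, until every kube-dns pod in kube-system is Ready.
		err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, p := range pods.Items {
				if !podReady(&p) {
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
		fmt.Println("kube-dns ready:", err == nil)
	}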
	I0404 22:55:26.466760   65047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:55:26.482692   65047 ops.go:34] apiserver oom_adj: -16
	I0404 22:55:26.482712   65047 kubeadm.go:591] duration metric: took 11.918553713s to restartPrimaryControlPlane
	I0404 22:55:26.482721   65047 kubeadm.go:393] duration metric: took 11.977000438s to StartCluster
	I0404 22:55:26.482769   65047 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.482864   65047 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:26.484995   65047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.485328   65047 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:55:26.487534   65047 out.go:177] * Verifying Kubernetes components...
	I0404 22:55:26.485383   65047 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:55:26.485563   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:55:26.489030   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:26.489050   65047 addons.go:69] Setting default-storageclass=true in profile "no-preload-024416"
	I0404 22:55:26.489093   65047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-024416"
	I0404 22:55:26.489056   65047 addons.go:69] Setting metrics-server=true in profile "no-preload-024416"
	I0404 22:55:26.489149   65047 addons.go:234] Setting addon metrics-server=true in "no-preload-024416"
	W0404 22:55:26.489172   65047 addons.go:243] addon metrics-server should already be in state true
	I0404 22:55:26.489211   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489054   65047 addons.go:69] Setting storage-provisioner=true in profile "no-preload-024416"
	I0404 22:55:26.489277   65047 addons.go:234] Setting addon storage-provisioner=true in "no-preload-024416"
	W0404 22:55:26.489290   65047 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:55:26.489319   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489539   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489573   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489591   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489641   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489667   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489685   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.508693   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0404 22:55:26.509305   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.510142   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.510166   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.510664   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.510832   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.511052   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41779
	I0404 22:55:26.511503   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.512213   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.512232   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.512619   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.513233   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.513270   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.515412   65047 addons.go:234] Setting addon default-storageclass=true in "no-preload-024416"
	W0404 22:55:26.515459   65047 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:55:26.515498   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.515891   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.515954   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.534673   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I0404 22:55:26.535571   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.536148   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.536173   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.536663   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.536896   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.537603   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37341
	I0404 22:55:26.538883   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.541150   65047 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:55:26.539561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0404 22:55:26.539868   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.542772   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:55:26.542787   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:55:26.542805   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.544250   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.544270   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.544593   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.544676   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.545317   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.545356   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.546171   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.546188   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.546711   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.546722   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547227   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.547285   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.547464   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.547499   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.547905   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.548076   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.548259   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.564561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0404 22:55:26.565127   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.565730   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.565757   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.566293   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.566516   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.567998   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0404 22:55:26.568463   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.568520   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.569025   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.569047   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.569379   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.569543   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.571698   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.572551   65047 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.572566   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:55:26.572582   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.574538   65047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.019306   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019754   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019781   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:23.019722   66369 retry.go:31] will retry after 1.404874513s: waiting for machine to come up
	I0404 22:55:24.425830   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426412   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426442   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:24.426364   66369 retry.go:31] will retry after 2.757787773s: waiting for machine to come up
	I0404 22:55:26.576073   65047 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.576092   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:55:26.576106   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.580084   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.580585   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.581225   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.581277   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.581495   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.581699   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.581854   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.582320   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582345   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.582386   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582604   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.584316   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.584463   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.584614   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.731392   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:26.753167   65047 node_ready.go:35] waiting up to 6m0s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:26.829134   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.955286   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:55:26.955377   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:55:26.968469   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.984915   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:55:26.984948   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:55:27.028529   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.028558   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:55:27.092343   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.186244   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186319   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186642   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.186662   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.186677   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.186690   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186709   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186969   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.187017   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.187031   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.193602   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.193623   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.193903   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.193913   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.193920   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878278   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878305   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.878702   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.878746   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.878760   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878779   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878787   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.879104   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.879197   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.879228   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054442   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054471   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.054800   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.054858   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054874   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.054896   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054905   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.055233   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.055259   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.055270   65047 addons.go:470] Verifying addon metrics-server=true in "no-preload-024416"
	I0404 22:55:28.055236   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.057439   65047 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0404 22:55:28.058994   65047 addons.go:505] duration metric: took 1.573614168s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0404 22:55:27.842914   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:30.342668   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:26.696179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.809448   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.955521   65393 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:26.955614   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.456445   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.956055   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.455728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.956667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.455874   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.955832   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.455677   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.956327   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:31.456660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.188296   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188797   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188827   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:27.188759   66369 retry.go:31] will retry after 2.351381492s: waiting for machine to come up
	I0404 22:55:29.541200   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541601   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541628   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:29.541553   66369 retry.go:31] will retry after 4.132646705s: waiting for machine to come up
	I0404 22:55:28.757883   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:31.257480   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:32.257641   65047 node_ready.go:49] node "no-preload-024416" has status "Ready":"True"
	I0404 22:55:32.257665   65047 node_ready.go:38] duration metric: took 5.504446554s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:32.257673   65047 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:32.263652   65047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268791   65047 pod_ready.go:92] pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.268811   65047 pod_ready.go:81] duration metric: took 5.13427ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268820   65047 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273130   65047 pod_ready.go:92] pod "etcd-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.273147   65047 pod_ready.go:81] duration metric: took 4.32194ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273155   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.841480   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:35.342317   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:31.956195   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.456027   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.955974   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.456657   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.956113   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.456687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.955878   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.456630   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.956721   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:36.456247   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.678106   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.678633   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Found IP for machine: 192.168.72.148
	I0404 22:55:33.678669   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserving static IP address...
	I0404 22:55:33.678686   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has current primary IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.679110   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.679145   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserved static IP address: 192.168.72.148
	I0404 22:55:33.679165   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | skip adding static IP to network mk-default-k8s-diff-port-952083 - found existing host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"}
	I0404 22:55:33.679184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Getting to WaitForSSH function...
	I0404 22:55:33.679196   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for SSH to be available...
	I0404 22:55:33.681734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682113   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.682144   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682283   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH client type: external
	I0404 22:55:33.682325   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa (-rw-------)
	I0404 22:55:33.682356   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:33.682372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | About to run SSH command:
	I0404 22:55:33.682427   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | exit 0
	I0404 22:55:33.812360   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:33.812704   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetConfigRaw
	I0404 22:55:33.813408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:33.816515   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.816970   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.817052   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.817322   64791 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/config.json ...
	I0404 22:55:33.817590   64791 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:33.817615   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:33.817828   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.820061   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820388   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.820421   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820604   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.820762   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.820948   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.821063   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.821211   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.821433   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.821450   64791 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:33.928894   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:33.928927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929163   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:55:33.929186   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.931838   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932292   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.932323   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932506   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.932688   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932844   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.933158   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.933321   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.933335   64791 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-952083 && echo "default-k8s-diff-port-952083" | sudo tee /etc/hostname
	I0404 22:55:34.060158   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-952083
	
	I0404 22:55:34.060185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.063179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063552   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.063586   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063777   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.063975   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064172   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064314   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.064477   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.064628   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.064650   64791 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-952083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-952083/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-952083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:34.186212   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:55:34.186240   64791 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:34.186309   64791 buildroot.go:174] setting up certificates
	I0404 22:55:34.186332   64791 provision.go:84] configureAuth start
	I0404 22:55:34.186351   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:34.186637   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:34.189184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189504   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.189544   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189635   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.191813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192315   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.192341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192507   64791 provision.go:143] copyHostCerts
	I0404 22:55:34.192560   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:34.192569   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:34.192622   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:34.192717   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:34.192726   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:34.192749   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:34.192812   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:34.192820   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:34.192838   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:34.192901   64791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-952083 san=[127.0.0.1 192.168.72.148 default-k8s-diff-port-952083 localhost minikube]
	I0404 22:55:34.326920   64791 provision.go:177] copyRemoteCerts
	I0404 22:55:34.326975   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:34.326997   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.329376   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329725   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.329752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.330118   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.330260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.330401   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.416395   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:34.443648   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0404 22:55:34.470803   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:55:34.497420   64791 provision.go:87] duration metric: took 311.071464ms to configureAuth
	I0404 22:55:34.497454   64791 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:34.497663   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:55:34.497759   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.500799   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501149   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.501180   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501387   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.501595   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501915   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.502071   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.502271   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.502303   64791 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:34.775054   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:34.775085   64791 machine.go:97] duration metric: took 957.478657ms to provisionDockerMachine
	I0404 22:55:34.775099   64791 start.go:293] postStartSetup for "default-k8s-diff-port-952083" (driver="kvm2")
	I0404 22:55:34.775112   64791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:34.775131   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:34.775488   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:34.775534   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.778577   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779005   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.779036   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779204   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.779393   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.779524   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.779658   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.864675   64791 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:34.869694   64791 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:34.869730   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:34.869826   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:34.869920   64791 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:34.870061   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:34.880961   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:34.907037   64791 start.go:296] duration metric: took 131.924012ms for postStartSetup
	I0404 22:55:34.907080   64791 fix.go:56] duration metric: took 19.833301571s for fixHost
	I0404 22:55:34.907108   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.909893   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910291   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.910316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910473   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.910679   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.910880   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.911020   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.911190   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.911412   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.911428   64791 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:55:35.022167   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271334.996001157
	
	I0404 22:55:35.022190   64791 fix.go:216] guest clock: 1712271334.996001157
	I0404 22:55:35.022200   64791 fix.go:229] Guest: 2024-04-04 22:55:34.996001157 +0000 UTC Remote: 2024-04-04 22:55:34.907085076 +0000 UTC m=+358.286689706 (delta=88.916081ms)
	I0404 22:55:35.022224   64791 fix.go:200] guest clock delta is within tolerance: 88.916081ms
	I0404 22:55:35.022231   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 19.948498144s
	I0404 22:55:35.022255   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.022485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:35.025707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026147   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.026179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026378   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.026876   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027047   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027120   64791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:35.027159   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.027276   64791 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:35.027318   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.030408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030592   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030785   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030819   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030910   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030929   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.030943   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.031146   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.031317   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031486   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.031653   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.105523   64791 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:35.145225   64791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:35.294577   64791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:35.301227   64791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:35.301318   64791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:35.318712   64791 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:35.318739   64791 start.go:494] detecting cgroup driver to use...
	I0404 22:55:35.318797   64791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:35.335684   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:35.351068   64791 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:35.351139   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:35.366084   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:35.381259   64791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:35.525652   64791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:35.702060   64791 docker.go:233] disabling docker service ...
	I0404 22:55:35.702137   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:35.720037   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:35.735605   64791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:35.868176   64791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:36.011977   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:36.030320   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:36.051953   64791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:55:36.052033   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.063475   64791 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:36.063539   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.075238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.086866   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.099524   64791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:36.112514   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.126407   64791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.146238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.158896   64791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:36.170410   64791 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:36.170482   64791 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:36.185608   64791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:36.196706   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:36.327075   64791 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:55:36.470054   64791 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:36.470125   64791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:36.475664   64791 start.go:562] Will wait 60s for crictl version
	I0404 22:55:36.475727   64791 ssh_runner.go:195] Run: which crictl
	I0404 22:55:36.480165   64791 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:36.519941   64791 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:36.520021   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.549053   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.579491   64791 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:55:36.581026   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:36.583730   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584150   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:36.584184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584371   64791 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:36.588964   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:36.602269   64791 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:36.602374   64791 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:55:36.602416   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:36.641719   64791 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:55:36.641787   64791 ssh_runner.go:195] Run: which lz4
	I0404 22:55:36.646084   64791 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:55:36.650605   64791 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:36.650644   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:55:34.279364   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.288411   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:37.280858   65047 pod_ready.go:92] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.280895   65047 pod_ready.go:81] duration metric: took 5.007733221s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.280913   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286888   65047 pod_ready.go:92] pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.286912   65047 pod_ready.go:81] duration metric: took 5.987359ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286922   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292489   65047 pod_ready.go:92] pod "kube-proxy-zmx89" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.292518   65047 pod_ready.go:81] duration metric: took 5.588199ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292530   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298455   65047 pod_ready.go:92] pod "kube-scheduler-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.298478   65047 pod_ready.go:81] duration metric: took 5.939579ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298493   65047 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.342788   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:39.841832   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.956068   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.456343   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.955665   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.456277   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.956681   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.455983   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.956000   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.456576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.956669   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.455855   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.281827   64791 crio.go:462] duration metric: took 1.635781318s to copy over tarball
	I0404 22:55:38.281962   64791 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:40.664660   64791 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.382669035s)
	I0404 22:55:40.664694   64791 crio.go:469] duration metric: took 2.382835483s to extract the tarball
	I0404 22:55:40.664704   64791 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:40.706370   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:40.753953   64791 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:55:40.753983   64791 cache_images.go:84] Images are preloaded, skipping loading
	I0404 22:55:40.753992   64791 kubeadm.go:928] updating node { 192.168.72.148 8444 v1.29.3 crio true true} ...
	I0404 22:55:40.754096   64791 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-952083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:40.754157   64791 ssh_runner.go:195] Run: crio config
	I0404 22:55:40.809287   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:40.809309   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:40.809318   64791 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:40.809338   64791 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.148 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-952083 NodeName:default-k8s-diff-port-952083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:40.809467   64791 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.148
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-952083"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:40.809531   64791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:55:40.821089   64791 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:40.821151   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:40.832576   64791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0404 22:55:40.853706   64791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:40.874277   64791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
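The rendered config above has just been written to /var/tmp/minikube/kubeadm.yaml.new. For reference, a small illustrative Go sketch (not part of minikube) that splits that file into its YAML documents and prints each kind, which should match the four documents shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration):

    // kinds.go: list the kind of every YAML document in the generated kubeadm config.
    // The path is the one from the log; everything else here is illustrative.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	// kubeadm configs separate documents with a bare "---" line.
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "kind: ") {
    				fmt.Println(strings.TrimPrefix(line, "kind: "))
    				break
    			}
    		}
    	}
    }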
	I0404 22:55:40.895096   64791 ssh_runner.go:195] Run: grep 192.168.72.148	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:40.899433   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
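The /etc/hosts rewrite above follows a common ensure-one-entry pattern: drop any existing line for the hostname, then append the desired mapping. A minimal Go sketch of the same idea (minikube actually does this with the shell one-liner shown; the helper name here is illustrative):

    // hosts.go: make sure /etc/hosts maps control-plane.minikube.internal to one IP.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop any stale entry for this hostname
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.72.148", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }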
	I0404 22:55:40.913456   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:41.041078   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:41.062340   64791 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083 for IP: 192.168.72.148
	I0404 22:55:41.062382   64791 certs.go:194] generating shared ca certs ...
	I0404 22:55:41.062402   64791 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:41.062583   64791 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:41.062640   64791 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:41.062662   64791 certs.go:256] generating profile certs ...
	I0404 22:55:41.062776   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/client.key
	I0404 22:55:41.062859   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key.c46373d6
	I0404 22:55:41.062921   64791 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key
	I0404 22:55:41.063037   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:41.063065   64791 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:41.063075   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:41.063099   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:41.063140   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:41.063166   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:41.063200   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:41.063842   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:41.113790   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:41.142967   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:41.174154   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:41.209434   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0404 22:55:41.244064   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:41.272716   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:41.297871   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:41.325547   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:41.352050   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:41.377876   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:41.404387   64791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:41.423187   64791 ssh_runner.go:195] Run: openssl version
	I0404 22:55:41.429873   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:41.442164   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447557   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447623   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.454131   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:41.467223   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:41.480671   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485831   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485898   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.492136   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:41.505696   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:41.519176   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524158   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524221   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.531176   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:55:41.543412   64791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:41.548643   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:41.555139   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:41.562284   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:41.569434   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:41.576462   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:41.583084   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
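The openssl -checkend 86400 runs above verify that each control-plane certificate is still valid 24 hours from now. A minimal Go equivalent using crypto/x509 (the path checked in main and the helper name are illustrative; the file paths are the ones from the log):

    // certcheck.go: report whether a PEM certificate is still valid d from now,
    // the same question "openssl x509 -checkend 86400" answers.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }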
	I0404 22:55:41.589945   64791 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:41.590031   64791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:41.590093   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:41.632266   64791 cri.go:89] found id: ""
	I0404 22:55:41.632350   64791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:41.644142   64791 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:41.644164   64791 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:41.644170   64791 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:41.644215   64791 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:41.656179   64791 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:41.657324   64791 kubeconfig.go:125] found "default-k8s-diff-port-952083" server: "https://192.168.72.148:8444"
	I0404 22:55:41.659605   64791 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:39.306769   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.307336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:42.038454   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:44.341305   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.341781   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.956291   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.455751   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.955701   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.456455   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.956511   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.456335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.956604   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.456239   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.955763   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:46.456691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.672105   64791 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.148
	I0404 22:55:42.028665   64791 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:42.028686   64791 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:42.028762   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:42.091777   64791 cri.go:89] found id: ""
	I0404 22:55:42.091854   64791 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:42.113539   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:42.124875   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:42.124901   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:42.124954   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 22:55:42.135500   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:42.135570   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:42.146253   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 22:55:42.156700   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:42.156761   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:42.168384   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.179440   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:42.179516   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.191004   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 22:55:42.201446   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:42.201506   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:42.212001   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:42.223300   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:42.338171   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.549365   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.211152725s)
	I0404 22:55:43.549401   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.801115   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.882593   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
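The five kubeadm init phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) rebuild the control plane from the generated config during a restart. A standalone Go sketch of invoking the same phases locally (minikube actually runs them over SSH via ssh_runner; this helper is illustrative, using the binary and config paths from the log):

    // phases.go: run the kubeadm init phases used for a control-plane restart.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func runPhase(args ...string) error {
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.29.3/kubeadm", args...)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	cfg := "--config=/var/tmp/minikube/kubeadm.yaml"
    	for _, phase := range [][]string{
    		{"init", "phase", "certs", "all", cfg},
    		{"init", "phase", "kubeconfig", "all", cfg},
    		{"init", "phase", "kubelet-start", cfg},
    		{"init", "phase", "control-plane", "all", cfg},
    		{"init", "phase", "etcd", "local", cfg},
    	} {
    		if err := runPhase(phase...); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
    			return
    		}
    	}
    }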
	I0404 22:55:43.959297   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:43.959380   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.459491   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.960236   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.988764   64791 api_server.go:72] duration metric: took 1.029467706s to wait for apiserver process to appear ...
	I0404 22:55:44.988792   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:44.988813   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:43.615360   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:45.804976   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:47.806675   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:48.357574   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.357611   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.357628   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.395772   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.395808   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.488922   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.505422   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.505481   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:48.988969   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.994000   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.994032   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.489335   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.500302   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:49.500347   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.989893   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.994450   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 22:55:50.001679   64791 api_server.go:141] control plane version: v1.29.3
	I0404 22:55:50.001715   64791 api_server.go:131] duration metric: took 5.012915028s to wait for apiserver health ...
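The healthz sequence above is a plain poll: the endpoint first returns 403 or 500 while post-start hooks (bootstrap-controller, rbac/bootstrap-roles, and friends) finish, then 200 once the apiserver is fully serving. A minimal Go sketch of such a poll (not minikube's api_server.go; TLS verification is skipped only because this sketch carries no CA bundle):

    // healthz.go: poll an apiserver /healthz URL until it returns 200 or a deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: the control plane is serving
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.72.148:8444/healthz", 2*time.Minute))
    }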
	I0404 22:55:50.001726   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:50.001737   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:50.004063   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:55:48.840760   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:50.842564   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.956402   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.456719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.956495   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.456256   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.955795   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.455864   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.956545   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.456336   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.956454   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:51.455845   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.005634   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:50.017205   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:55:50.041244   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:50.050985   64791 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:50.051033   64791 system_pods.go:61] "coredns-76f75df574-psx97" [9e220912-4d45-45d7-85cb-65396d969b27] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:50.051044   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [980bda86-5d40-4762-ae29-9a1d01bcd17f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:50.051055   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [0d3ec134-dc70-445a-a2f8-1c00f6711b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:50.051064   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [d82ba649-a2ae-479e-8aa1-e734bc69f702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:50.051074   64791 system_pods.go:61] "kube-proxy-ssg9w" [23098716-a4cd-4164-a604-fc27b807050f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:50.051081   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [58c1457c-816c-4f10-ac46-be14b4e20592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:50.051089   64791 system_pods.go:61] "metrics-server-57f55c9bc5-zbl54" [f754b7dd-4233-4faa-b8d0-0d42d4c432a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:50.051105   64791 system_pods.go:61] "storage-provisioner" [3ea130dd-2282-4094-b3ca-990faf89b79b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:50.051116   64791 system_pods.go:74] duration metric: took 9.852724ms to wait for pod list to return data ...
	I0404 22:55:50.051136   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:50.056106   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:50.056149   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:50.056161   64791 node_conditions.go:105] duration metric: took 5.019174ms to run NodePressure ...
	I0404 22:55:50.056180   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:50.366600   64791 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371116   64791 kubeadm.go:733] kubelet initialised
	I0404 22:55:50.371136   64791 kubeadm.go:734] duration metric: took 4.514303ms waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371152   64791 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:50.376537   64791 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.382379   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382406   64791 pod_ready.go:81] duration metric: took 5.841107ms for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.382415   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382421   64791 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.387835   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387862   64791 pod_ready.go:81] duration metric: took 5.433039ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.387874   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387883   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.395448   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395477   64791 pod_ready.go:81] duration metric: took 7.587276ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.395488   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395494   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.444766   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444796   64791 pod_ready.go:81] duration metric: took 49.295101ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.444807   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444813   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845479   64791 pod_ready.go:92] pod "kube-proxy-ssg9w" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:50.845502   64791 pod_ready.go:81] duration metric: took 400.682428ms for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845511   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
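The pod_ready.go lines above and below repeatedly fetch each system pod and check its Ready condition until it is True or the 4m0s timeout expires. A minimal sketch of that loop, assuming client-go is available (the pod name used in main and the kubeconfig location are illustrative):

    // podready.go: wait for a pod's Ready condition to become True.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitPodReady(cs, "kube-system", "kube-scheduler-default-k8s-diff-port-952083", 4*time.Minute))
    }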
	I0404 22:55:50.305975   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.307023   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.842745   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:55.341222   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:51.955847   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.456474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.956717   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.456485   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.956668   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.455709   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.956540   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.455959   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.955819   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:56.456025   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.852107   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.856385   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.806097   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.806696   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.841694   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:00.341504   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.955746   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.456752   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.956458   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.455791   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.956520   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.456037   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.956424   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.456347   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.955807   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.456367   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.365969   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.853738   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:57.853764   64791 pod_ready.go:81] duration metric: took 7.008247096s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:57.853774   64791 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:59.862975   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:59.305144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.805592   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:02.341599   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.842260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.956676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.455725   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.956240   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.456026   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.955796   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.455684   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.955830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.455658   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.956488   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:06.456780   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.863099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.361040   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.362845   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:03.805883   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.305272   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:08.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:07.341339   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:09.341800   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.342526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.956270   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.456242   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.956623   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.455980   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.955699   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.456558   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.955713   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.455854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.955758   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:11.456543   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.363849   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.861759   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.805375   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:12.808721   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:13.841173   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.842526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.956223   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.456571   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.956246   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.456188   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.956319   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.456519   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.955719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.455682   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.956653   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:16.456447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.361956   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.362846   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.306163   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.806194   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.845799   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.342615   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:16.955750   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.456215   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.956505   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.456183   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.955744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.456227   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.955977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.455808   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.956276   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.456049   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.861226   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:19.861343   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.307101   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.805150   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.839779   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.841442   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:21.956160   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.456145   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.956335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.456579   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.956691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.456562   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.956005   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.456432   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.956467   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:26.456167   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.862937   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.360818   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.809913   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.305921   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.342163   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.347258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:26.956592   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:26.956691   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:27.005811   65393 cri.go:89] found id: ""
	I0404 22:56:27.005835   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.005842   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:27.005847   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:27.005917   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:27.042188   65393 cri.go:89] found id: ""
	I0404 22:56:27.042211   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.042235   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:27.042241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:27.042286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:27.079865   65393 cri.go:89] found id: ""
	I0404 22:56:27.079892   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.079900   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:27.079906   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:27.079958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:27.116998   65393 cri.go:89] found id: ""
	I0404 22:56:27.117024   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.117031   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:27.117037   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:27.117086   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:27.156188   65393 cri.go:89] found id: ""
	I0404 22:56:27.156214   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.156221   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:27.156236   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:27.156314   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:27.191588   65393 cri.go:89] found id: ""
	I0404 22:56:27.191620   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.191633   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:27.191640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:27.191687   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:27.231075   65393 cri.go:89] found id: ""
	I0404 22:56:27.231107   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.231117   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:27.231124   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:27.231190   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:27.270480   65393 cri.go:89] found id: ""
	I0404 22:56:27.270513   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.270525   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
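The block above is one pass of container discovery: for each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) it runs crictl ps -a --quiet --name=<component> and finds nothing, which is why the run then falls back to host-level logs. A minimal local sketch of that enumeration, assuming crictl is on the PATH and using the same filter names (this is not the cri.go implementation itself):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component filters the log above cycles through.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs, one per line; -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}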
	I0404 22:56:27.270537   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:27.270561   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:27.324332   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:27.324373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:27.339847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:27.339874   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:27.468846   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:27.468868   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:27.468885   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:27.533979   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:27.534016   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
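With no control-plane containers found, the gathering step above falls back to host-level sources: the kubelet and CRI-O journals, recent warning-and-above dmesg output, kubectl describe nodes (which fails here with "connection refused" because nothing is listening on localhost:8443), and a final container-status listing. A small sketch that collects the same sources locally, assuming systemd journals and crictl are available; the command strings are copied from the log, not from minikube's logs.go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same shell commands the log above issues, keyed by section name.
	sources := []struct{ name, cmd string }{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"CRI-O", `sudo journalctl -u crio -n 400`},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("==> %s <==\n", s.name)
		if err != nil {
			fmt.Printf("(command failed: %v)\n", err)
		}
		fmt.Println(string(out))
	}
}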
	I0404 22:56:30.080447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:30.094371   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:30.094442   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:30.132187   65393 cri.go:89] found id: ""
	I0404 22:56:30.132211   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.132219   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:30.132225   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:30.132271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:30.173402   65393 cri.go:89] found id: ""
	I0404 22:56:30.173427   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.173437   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:30.173445   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:30.173509   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:30.212702   65393 cri.go:89] found id: ""
	I0404 22:56:30.212759   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.212773   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:30.212784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:30.212857   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:30.267124   65393 cri.go:89] found id: ""
	I0404 22:56:30.267153   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.267164   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:30.267171   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:30.267240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:30.314850   65393 cri.go:89] found id: ""
	I0404 22:56:30.314877   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.314887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:30.314895   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:30.314951   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:30.353961   65393 cri.go:89] found id: ""
	I0404 22:56:30.353985   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.353996   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:30.354003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:30.354065   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:30.393287   65393 cri.go:89] found id: ""
	I0404 22:56:30.393321   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.393333   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:30.393340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:30.393402   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:30.432268   65393 cri.go:89] found id: ""
	I0404 22:56:30.432304   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.432315   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:30.432333   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:30.432349   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:30.498906   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:30.498941   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.544676   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:30.544711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:30.595528   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:30.595562   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:30.610773   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:30.610811   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:30.688433   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:26.862939   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.360914   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.360952   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.806657   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:32.304653   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.841404   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.341994   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:33.188634   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:33.203199   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:33.203262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:33.243222   65393 cri.go:89] found id: ""
	I0404 22:56:33.243250   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.243257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:33.243262   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:33.243330   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:33.284525   65393 cri.go:89] found id: ""
	I0404 22:56:33.284550   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.284560   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:33.284567   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:33.284621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:33.324219   65393 cri.go:89] found id: ""
	I0404 22:56:33.324249   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.324266   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:33.324273   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:33.324328   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:33.362229   65393 cri.go:89] found id: ""
	I0404 22:56:33.362254   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.362265   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:33.362272   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:33.362333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.404616   65393 cri.go:89] found id: ""
	I0404 22:56:33.404651   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.404663   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:33.404671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:33.404741   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:33.445123   65393 cri.go:89] found id: ""
	I0404 22:56:33.445150   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.445160   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:33.445168   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:33.445227   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:33.485996   65393 cri.go:89] found id: ""
	I0404 22:56:33.486025   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.486033   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:33.486041   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:33.486108   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:33.524275   65393 cri.go:89] found id: ""
	I0404 22:56:33.524299   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.524307   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:33.524315   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:33.524326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:33.577095   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:33.577133   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:33.592799   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:33.592830   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:33.672784   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:33.672804   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:33.672815   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:33.748049   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:33.748091   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:36.293335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:36.308303   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:36.308395   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:36.346627   65393 cri.go:89] found id: ""
	I0404 22:56:36.346648   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.346656   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:36.346661   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:36.346704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:36.386923   65393 cri.go:89] found id: ""
	I0404 22:56:36.386950   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.386958   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:36.386963   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:36.387011   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:36.427795   65393 cri.go:89] found id: ""
	I0404 22:56:36.427824   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.427832   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:36.427838   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:36.427898   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:36.464754   65393 cri.go:89] found id: ""
	I0404 22:56:36.464780   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.464790   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:36.464797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:36.464864   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.361764   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:35.861327   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.807482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:37.305470   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.842142   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:38.843672   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.341676   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.499502   65393 cri.go:89] found id: ""
	I0404 22:56:36.499529   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.499540   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:36.499549   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:36.499610   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:36.536686   65393 cri.go:89] found id: ""
	I0404 22:56:36.536716   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.536726   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:36.536734   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:36.536782   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:36.572573   65393 cri.go:89] found id: ""
	I0404 22:56:36.572602   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.572614   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:36.572623   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:36.572683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:36.607444   65393 cri.go:89] found id: ""
	I0404 22:56:36.607474   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.607483   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:36.607491   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:36.607502   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:36.662990   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:36.663040   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:36.677996   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:36.678019   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:36.751850   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:36.751871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:36.751886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:36.831451   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:36.831484   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:39.393170   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:39.407581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:39.407640   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:39.441850   65393 cri.go:89] found id: ""
	I0404 22:56:39.441879   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.441889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:39.441896   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:39.441981   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:39.476890   65393 cri.go:89] found id: ""
	I0404 22:56:39.476919   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.476931   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:39.476938   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:39.477001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:39.511497   65393 cri.go:89] found id: ""
	I0404 22:56:39.511523   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.511534   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:39.511540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:39.511605   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:39.549485   65393 cri.go:89] found id: ""
	I0404 22:56:39.549514   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.549526   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:39.549534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:39.549594   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:39.589203   65393 cri.go:89] found id: ""
	I0404 22:56:39.589234   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.589243   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:39.589249   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:39.589311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:39.626903   65393 cri.go:89] found id: ""
	I0404 22:56:39.626926   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.626939   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:39.626946   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:39.627008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:39.660975   65393 cri.go:89] found id: ""
	I0404 22:56:39.660999   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.661007   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:39.661016   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:39.661067   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:39.697162   65393 cri.go:89] found id: ""
	I0404 22:56:39.697187   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.697195   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:39.697203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:39.697217   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:39.755642   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:39.755683   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:39.773016   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:39.773052   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:39.849466   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:39.849491   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:39.849507   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:39.940680   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:39.940716   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:38.360638   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:40.361088   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:39.305528   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.307136   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.844830   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:46.342481   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:42.486573   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:42.500548   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:42.500612   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:42.539890   65393 cri.go:89] found id: ""
	I0404 22:56:42.539926   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.539954   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:42.539964   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:42.540030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:42.576664   65393 cri.go:89] found id: ""
	I0404 22:56:42.576699   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.576709   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:42.576715   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:42.576816   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:42.615456   65393 cri.go:89] found id: ""
	I0404 22:56:42.615489   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.615500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:42.615507   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:42.615569   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:42.667886   65393 cri.go:89] found id: ""
	I0404 22:56:42.667917   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.667928   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:42.667935   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:42.668001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:42.726120   65393 cri.go:89] found id: ""
	I0404 22:56:42.726150   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.726162   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:42.726169   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:42.726233   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:42.781280   65393 cri.go:89] found id: ""
	I0404 22:56:42.781305   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.781316   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:42.781322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:42.781386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:42.818419   65393 cri.go:89] found id: ""
	I0404 22:56:42.818449   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.818459   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:42.818466   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:42.818531   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:42.867866   65393 cri.go:89] found id: ""
	I0404 22:56:42.867902   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.867911   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:42.867920   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:42.867935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:42.953141   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:42.953186   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:42.994936   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:42.994968   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:43.047257   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:43.047288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:43.062391   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:43.062426   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:43.138948   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:45.639469   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:45.654584   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:45.654662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:45.693244   65393 cri.go:89] found id: ""
	I0404 22:56:45.693267   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.693276   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:45.693281   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:45.693335   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:45.732468   65393 cri.go:89] found id: ""
	I0404 22:56:45.732501   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.732513   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:45.732520   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:45.732587   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:45.769504   65393 cri.go:89] found id: ""
	I0404 22:56:45.769538   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.769552   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:45.769560   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:45.769625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:45.815840   65393 cri.go:89] found id: ""
	I0404 22:56:45.815864   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.815872   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:45.815877   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:45.815987   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:45.856479   65393 cri.go:89] found id: ""
	I0404 22:56:45.856511   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.856522   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:45.856530   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:45.856596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:45.896703   65393 cri.go:89] found id: ""
	I0404 22:56:45.896726   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.896734   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:45.896739   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:45.896797   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:45.938227   65393 cri.go:89] found id: ""
	I0404 22:56:45.938253   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.938261   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:45.938266   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:45.938349   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:45.975762   65393 cri.go:89] found id: ""
	I0404 22:56:45.975790   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.975801   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:45.975811   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:45.975823   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:46.056563   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:46.056599   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.099210   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:46.099242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:46.153524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:46.153560   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:46.169268   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:46.169297   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:46.244893   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:42.361586   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:44.860609   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.806644   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:45.807773   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:47.807842   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.841498   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.341274   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.745334   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:48.760547   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:48.760625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:48.799676   65393 cri.go:89] found id: ""
	I0404 22:56:48.799699   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.799706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:48.799721   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:48.799780   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:48.843434   65393 cri.go:89] found id: ""
	I0404 22:56:48.843467   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.843476   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:48.843481   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:48.843544   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:48.895391   65393 cri.go:89] found id: ""
	I0404 22:56:48.895421   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.895440   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:48.895448   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:48.895513   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:48.936222   65393 cri.go:89] found id: ""
	I0404 22:56:48.936252   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.936263   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:48.936271   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:48.936334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:48.974533   65393 cri.go:89] found id: ""
	I0404 22:56:48.974563   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.974570   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:48.974575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:48.974629   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:49.015378   65393 cri.go:89] found id: ""
	I0404 22:56:49.015406   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.015424   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:49.015440   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:49.015501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:49.056623   65393 cri.go:89] found id: ""
	I0404 22:56:49.056647   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.056658   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:49.056664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:49.056725   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:49.102414   65393 cri.go:89] found id: ""
	I0404 22:56:49.102442   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.102453   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:49.102464   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:49.102476   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:49.158193   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:49.158240   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:49.173121   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:49.173147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:49.248973   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:49.249000   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:49.249017   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:49.341732   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:49.341778   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.861623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.863321   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.362926   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:50.305198   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:52.305614   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:53.348777   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:55.848753   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.888403   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:51.903442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:51.903503   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:51.941672   65393 cri.go:89] found id: ""
	I0404 22:56:51.941698   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.941706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:51.941712   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:51.941766   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:51.981373   65393 cri.go:89] found id: ""
	I0404 22:56:51.981396   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.981408   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:51.981413   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:51.981460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:52.019539   65393 cri.go:89] found id: ""
	I0404 22:56:52.019567   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.019575   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:52.019581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:52.019645   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:52.059015   65393 cri.go:89] found id: ""
	I0404 22:56:52.059050   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.059060   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:52.059073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:52.059134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:52.099879   65393 cri.go:89] found id: ""
	I0404 22:56:52.099904   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.099911   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:52.099917   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:52.099975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:52.140665   65393 cri.go:89] found id: ""
	I0404 22:56:52.140739   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.140752   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:52.140761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:52.140833   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:52.182063   65393 cri.go:89] found id: ""
	I0404 22:56:52.182091   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.182100   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:52.182106   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:52.182161   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:52.218610   65393 cri.go:89] found id: ""
	I0404 22:56:52.218636   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.218644   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:52.218651   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:52.218666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:52.232987   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:52.233014   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:52.307317   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:52.307343   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:52.307358   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:52.390484   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:52.390522   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:52.437568   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:52.437601   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:54.995830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:55.010758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:55.010818   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:55.057312   65393 cri.go:89] found id: ""
	I0404 22:56:55.057342   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.057354   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:55.057362   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:55.057440   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:55.097774   65393 cri.go:89] found id: ""
	I0404 22:56:55.097803   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.097815   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:55.097822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:55.097881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:55.134169   65393 cri.go:89] found id: ""
	I0404 22:56:55.134198   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.134207   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:55.134215   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:55.134268   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:55.174151   65393 cri.go:89] found id: ""
	I0404 22:56:55.174191   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.174203   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:55.174210   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:55.174297   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:55.213618   65393 cri.go:89] found id: ""
	I0404 22:56:55.213646   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.213655   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:55.213661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:55.213722   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:55.252406   65393 cri.go:89] found id: ""
	I0404 22:56:55.252435   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.252446   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:55.252455   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:55.252536   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:55.289595   65393 cri.go:89] found id: ""
	I0404 22:56:55.289623   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.289633   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:55.289640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:55.289702   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:55.328579   65393 cri.go:89] found id: ""
	I0404 22:56:55.328611   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.328621   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:55.328632   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:55.328647   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:55.384434   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:55.384470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:55.399311   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:55.399336   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:55.478341   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:55.478370   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:55.478404   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:55.560209   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:55.560242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:53.861243   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.361466   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:54.306859   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.804903   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.341302   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.342865   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.104854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:58.119735   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:58.119813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:58.158145   65393 cri.go:89] found id: ""
	I0404 22:56:58.158169   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.158182   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:58.158190   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:58.158255   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:58.198676   65393 cri.go:89] found id: ""
	I0404 22:56:58.198703   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.198714   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:58.198721   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:58.198779   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:58.236393   65393 cri.go:89] found id: ""
	I0404 22:56:58.236419   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.236430   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:58.236436   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:58.236505   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:58.274688   65393 cri.go:89] found id: ""
	I0404 22:56:58.274714   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.274724   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:58.274731   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:58.274796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:58.311908   65393 cri.go:89] found id: ""
	I0404 22:56:58.311935   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.311947   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:58.311956   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:58.312016   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:58.353477   65393 cri.go:89] found id: ""
	I0404 22:56:58.353500   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.353513   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:58.353518   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:58.353577   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.390981   65393 cri.go:89] found id: ""
	I0404 22:56:58.391004   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.391012   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:58.391017   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:58.391084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:58.433125   65393 cri.go:89] found id: ""
	I0404 22:56:58.433147   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.433160   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:58.433168   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:58.433181   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:58.488214   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:58.488251   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:58.503274   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:58.503315   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:58.583124   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:58.583149   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:58.583167   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:58.662856   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:58.662889   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:01.204061   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:01.219133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:01.219213   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:01.256615   65393 cri.go:89] found id: ""
	I0404 22:57:01.256641   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.256650   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:01.256655   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:01.256704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:01.294765   65393 cri.go:89] found id: ""
	I0404 22:57:01.294796   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.294805   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:01.294813   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:01.294874   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:01.334936   65393 cri.go:89] found id: ""
	I0404 22:57:01.334966   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.334976   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:01.334983   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:01.335030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:01.382389   65393 cri.go:89] found id: ""
	I0404 22:57:01.382429   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.382438   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:01.382443   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:01.382491   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:01.421962   65393 cri.go:89] found id: ""
	I0404 22:57:01.422001   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.422013   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:01.422020   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:01.422084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:01.460394   65393 cri.go:89] found id: ""
	I0404 22:57:01.460424   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.460432   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:01.460437   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:01.460484   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.365468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.862158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.807541   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.305535   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:03.305610   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:02.841121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:04.841717   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.497525   65393 cri.go:89] found id: ""
	I0404 22:57:01.497552   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.497561   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:01.497566   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:01.497623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:01.535396   65393 cri.go:89] found id: ""
	I0404 22:57:01.535431   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.535442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:01.535454   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:01.535469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:01.588397   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:01.588437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:01.603396   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:01.603427   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:01.686312   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:01.686332   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:01.686344   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:01.771352   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:01.771393   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.316206   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:04.330612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:04.330677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:04.377998   65393 cri.go:89] found id: ""
	I0404 22:57:04.378023   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.378031   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:04.378036   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:04.378098   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:04.421775   65393 cri.go:89] found id: ""
	I0404 22:57:04.421811   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.421822   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:04.421830   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:04.421894   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:04.463234   65393 cri.go:89] found id: ""
	I0404 22:57:04.463265   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.463276   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:04.463284   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:04.463347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:04.510879   65393 cri.go:89] found id: ""
	I0404 22:57:04.510907   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.510916   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:04.510925   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:04.511012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:04.552305   65393 cri.go:89] found id: ""
	I0404 22:57:04.552336   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.552348   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:04.552356   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:04.552413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:04.592631   65393 cri.go:89] found id: ""
	I0404 22:57:04.592661   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.592672   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:04.592679   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:04.592739   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:04.631854   65393 cri.go:89] found id: ""
	I0404 22:57:04.631883   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.631893   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:04.631900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:04.631966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:04.670530   65393 cri.go:89] found id: ""
	I0404 22:57:04.670554   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.670563   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:04.670570   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:04.670582   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:04.685546   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:04.685575   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:04.770377   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:04.770400   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:04.770420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:04.853637   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:04.853674   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.899690   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:04.899719   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:03.362881   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.363597   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.306377   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:07.805659   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:06.842313   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.341347   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:07.451203   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:07.465593   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:07.465665   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:07.505475   65393 cri.go:89] found id: ""
	I0404 22:57:07.505504   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.505516   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:07.505524   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:07.505584   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:07.543744   65393 cri.go:89] found id: ""
	I0404 22:57:07.543802   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.543814   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:07.543822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:07.543891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:07.586034   65393 cri.go:89] found id: ""
	I0404 22:57:07.586059   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.586067   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:07.586073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:07.586133   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:07.624880   65393 cri.go:89] found id: ""
	I0404 22:57:07.624908   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.624917   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:07.624932   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:07.624992   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:07.667653   65393 cri.go:89] found id: ""
	I0404 22:57:07.667684   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.667696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:07.667704   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:07.667798   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:07.704434   65393 cri.go:89] found id: ""
	I0404 22:57:07.704466   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.704478   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:07.704486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:07.704547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:07.741679   65393 cri.go:89] found id: ""
	I0404 22:57:07.741702   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.741710   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:07.741715   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:07.741760   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:07.782407   65393 cri.go:89] found id: ""
	I0404 22:57:07.782434   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.782442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:07.782450   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:07.782461   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:07.796141   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:07.796175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:07.880985   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:07.881004   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:07.881015   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:07.963986   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:07.964027   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:08.010874   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:08.010907   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.564802   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:10.581360   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:10.581430   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:10.619856   65393 cri.go:89] found id: ""
	I0404 22:57:10.619882   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.619889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:10.619895   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:10.619958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:10.662867   65393 cri.go:89] found id: ""
	I0404 22:57:10.662893   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.662900   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:10.662906   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:10.662966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:10.705241   65393 cri.go:89] found id: ""
	I0404 22:57:10.705277   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.705290   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:10.705298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:10.705363   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:10.743811   65393 cri.go:89] found id: ""
	I0404 22:57:10.743844   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.743855   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:10.743863   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:10.743918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:10.783648   65393 cri.go:89] found id: ""
	I0404 22:57:10.783672   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.783684   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:10.783690   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:10.783743   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:10.825606   65393 cri.go:89] found id: ""
	I0404 22:57:10.825639   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.825649   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:10.825657   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:10.825712   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:10.865138   65393 cri.go:89] found id: ""
	I0404 22:57:10.865168   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.865178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:10.865185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:10.865238   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:10.906855   65393 cri.go:89] found id: ""
	I0404 22:57:10.906881   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.906888   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:10.906895   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:10.906937   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.960341   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:10.960375   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:10.975199   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:10.975228   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:11.083944   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:11.083970   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:11.083985   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:11.166754   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:11.166794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:07.860607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.864277   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.806222   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:12.305904   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:11.841055   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.841573   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:15.843393   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.708326   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:13.724345   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:13.724413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:13.766699   65393 cri.go:89] found id: ""
	I0404 22:57:13.766732   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.766744   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:13.766751   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:13.766813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:13.804517   65393 cri.go:89] found id: ""
	I0404 22:57:13.804545   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.804556   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:13.804564   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:13.804623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:13.846168   65393 cri.go:89] found id: ""
	I0404 22:57:13.846202   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.846213   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:13.846219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:13.846278   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:13.894719   65393 cri.go:89] found id: ""
	I0404 22:57:13.894743   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.894753   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:13.894761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:13.894823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:13.929981   65393 cri.go:89] found id: ""
	I0404 22:57:13.930013   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.930024   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:13.930031   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:13.930102   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:13.968531   65393 cri.go:89] found id: ""
	I0404 22:57:13.968578   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.968590   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:13.968598   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:13.968662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:14.012460   65393 cri.go:89] found id: ""
	I0404 22:57:14.012492   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.012502   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:14.012509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:14.012571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:14.051175   65393 cri.go:89] found id: ""
	I0404 22:57:14.051207   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.051218   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:14.051228   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:14.051243   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:14.107968   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:14.108004   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:14.123756   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:14.123794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:14.209452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:14.209475   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:14.209493   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:14.287297   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:14.287334   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:11.864620   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.360720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.361734   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.307191   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.806031   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.341533   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.344059   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.832667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:16.850323   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:16.850396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:16.897389   65393 cri.go:89] found id: ""
	I0404 22:57:16.897415   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.897426   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:16.897433   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:16.897502   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:16.938893   65393 cri.go:89] found id: ""
	I0404 22:57:16.938927   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.938939   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:16.938945   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:16.939005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:16.978621   65393 cri.go:89] found id: ""
	I0404 22:57:16.978648   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.978655   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:16.978661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:16.978707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:17.036499   65393 cri.go:89] found id: ""
	I0404 22:57:17.036523   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.036532   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:17.036539   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:17.036596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:17.092932   65393 cri.go:89] found id: ""
	I0404 22:57:17.092958   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.092966   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:17.092972   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:17.093026   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:17.129925   65393 cri.go:89] found id: ""
	I0404 22:57:17.129948   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.129956   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:17.129963   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:17.130025   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:17.168192   65393 cri.go:89] found id: ""
	I0404 22:57:17.168218   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.168232   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:17.168238   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:17.168317   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:17.213090   65393 cri.go:89] found id: ""
	I0404 22:57:17.213115   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.213125   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:17.213136   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:17.213149   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:17.295489   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:17.295527   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:17.350780   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:17.350832   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:17.403580   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:17.403621   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:17.419093   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:17.419131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:17.503614   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.004676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:20.019340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:20.019421   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:20.059400   65393 cri.go:89] found id: ""
	I0404 22:57:20.059431   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.059442   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:20.059448   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:20.059510   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:20.104769   65393 cri.go:89] found id: ""
	I0404 22:57:20.104796   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.104808   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:20.104815   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:20.104875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:20.143209   65393 cri.go:89] found id: ""
	I0404 22:57:20.143233   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.143241   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:20.143248   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:20.143300   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:20.187941   65393 cri.go:89] found id: ""
	I0404 22:57:20.187976   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.187987   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:20.187995   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:20.188082   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:20.231158   65393 cri.go:89] found id: ""
	I0404 22:57:20.231192   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.231200   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:20.231206   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:20.231271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:20.281069   65393 cri.go:89] found id: ""
	I0404 22:57:20.281102   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.281113   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:20.281121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:20.281187   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:20.320403   65393 cri.go:89] found id: ""
	I0404 22:57:20.320436   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.320448   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:20.320456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:20.320529   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:20.371202   65393 cri.go:89] found id: ""
	I0404 22:57:20.371229   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.371237   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:20.371246   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:20.371256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:20.416698   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:20.416725   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:20.468328   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:20.468362   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:20.483250   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:20.483274   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:20.562841   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.562871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:20.562886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:18.363136   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.860719   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.806337   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:21.306098   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:22.841457   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:24.843001   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.141477   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:23.157859   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:23.157924   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:23.203492   65393 cri.go:89] found id: ""
	I0404 22:57:23.203526   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.203538   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:23.203545   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:23.203613   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:23.248190   65393 cri.go:89] found id: ""
	I0404 22:57:23.248220   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.248231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:23.248244   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:23.248310   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:23.284427   65393 cri.go:89] found id: ""
	I0404 22:57:23.284456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.284467   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:23.284475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:23.284562   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:23.322429   65393 cri.go:89] found id: ""
	I0404 22:57:23.322456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.322464   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:23.322469   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:23.322534   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:23.364030   65393 cri.go:89] found id: ""
	I0404 22:57:23.364069   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.364080   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:23.364087   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:23.364168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:23.408306   65393 cri.go:89] found id: ""
	I0404 22:57:23.408343   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.408356   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:23.408363   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:23.408423   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:23.452933   65393 cri.go:89] found id: ""
	I0404 22:57:23.452968   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.452976   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:23.452982   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:23.453036   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:23.494156   65393 cri.go:89] found id: ""
	I0404 22:57:23.494184   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.494193   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:23.494203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:23.494222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:23.548013   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:23.548053   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:23.564765   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:23.564797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:23.642661   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:23.642685   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:23.642700   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:23.737958   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:23.737996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:26.290576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:26.307580   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:26.307641   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:26.345971   65393 cri.go:89] found id: ""
	I0404 22:57:26.346000   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.346011   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:26.346018   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:26.346077   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:26.386979   65393 cri.go:89] found id: ""
	I0404 22:57:26.387009   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.387019   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:26.387026   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:26.387112   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:26.430627   65393 cri.go:89] found id: ""
	I0404 22:57:26.430649   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.430665   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:26.430671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:26.430724   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:26.469657   65393 cri.go:89] found id: ""
	I0404 22:57:26.469705   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.469716   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:26.469723   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:26.469794   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:22.861245   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.360840   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.804732   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.805727   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.805819   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.343674   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.843199   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:26.513853   65393 cri.go:89] found id: ""
	I0404 22:57:26.513877   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.513887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:26.513894   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:26.513954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:26.557702   65393 cri.go:89] found id: ""
	I0404 22:57:26.557731   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.557742   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:26.557749   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:26.557807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:26.595248   65393 cri.go:89] found id: ""
	I0404 22:57:26.595279   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.595291   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:26.595298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:26.595364   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:26.634552   65393 cri.go:89] found id: ""
	I0404 22:57:26.634581   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.634591   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:26.634603   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:26.634618   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:26.686928   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:26.686963   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:26.701308   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:26.701341   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:26.785243   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:26.785267   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:26.785286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:26.867513   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:26.867555   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.416286   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:29.434234   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:29.434308   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:29.488598   65393 cri.go:89] found id: ""
	I0404 22:57:29.488628   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.488637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:29.488643   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:29.488727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:29.541107   65393 cri.go:89] found id: ""
	I0404 22:57:29.541130   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.541138   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:29.541144   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:29.541204   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:29.597144   65393 cri.go:89] found id: ""
	I0404 22:57:29.597174   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.597186   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:29.597192   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:29.597258   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:29.636833   65393 cri.go:89] found id: ""
	I0404 22:57:29.636865   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.636874   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:29.636881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:29.636961   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:29.672867   65393 cri.go:89] found id: ""
	I0404 22:57:29.672893   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.672903   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:29.672909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:29.672970   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:29.713135   65393 cri.go:89] found id: ""
	I0404 22:57:29.713168   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.713180   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:29.713188   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:29.713279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:29.750862   65393 cri.go:89] found id: ""
	I0404 22:57:29.750895   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.750906   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:29.750914   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:29.750965   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:29.790274   65393 cri.go:89] found id: ""
	I0404 22:57:29.790302   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.790311   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:29.790320   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:29.790332   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.831848   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:29.831884   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:29.889443   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:29.889478   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:29.905468   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:29.905500   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:29.983283   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:29.983313   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:29.983328   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:27.862273   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:30.361189   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.806568   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:31.807248   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.341638   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.841257   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.567351   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:32.581988   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:32.582052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:32.619164   65393 cri.go:89] found id: ""
	I0404 22:57:32.619191   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.619201   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:32.619208   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:32.619262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:32.656844   65393 cri.go:89] found id: ""
	I0404 22:57:32.656882   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.656894   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:32.656902   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:32.656966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:32.693024   65393 cri.go:89] found id: ""
	I0404 22:57:32.693052   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.693064   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:32.693071   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:32.693134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:32.732579   65393 cri.go:89] found id: ""
	I0404 22:57:32.732609   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.732618   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:32.732625   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:32.732685   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:32.777586   65393 cri.go:89] found id: ""
	I0404 22:57:32.777624   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.777634   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:32.777644   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:32.777713   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:32.817079   65393 cri.go:89] found id: ""
	I0404 22:57:32.817115   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.817127   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:32.817136   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:32.817199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:32.856945   65393 cri.go:89] found id: ""
	I0404 22:57:32.856978   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.856988   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:32.856994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:32.857047   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:32.895047   65393 cri.go:89] found id: ""
	I0404 22:57:32.895070   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.895082   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:32.895090   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:32.895103   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.949865   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:32.949904   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:32.964629   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:32.964656   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:33.039424   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:33.039446   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:33.039458   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:33.115819   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:33.115854   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:35.659512   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:35.674512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:35.674589   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:35.714058   65393 cri.go:89] found id: ""
	I0404 22:57:35.714093   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.714102   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:35.714109   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:35.714173   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:35.753765   65393 cri.go:89] found id: ""
	I0404 22:57:35.753800   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.753811   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:35.753819   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:35.753883   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:35.792177   65393 cri.go:89] found id: ""
	I0404 22:57:35.792204   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.792216   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:35.792223   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:35.792288   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:35.829891   65393 cri.go:89] found id: ""
	I0404 22:57:35.829923   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.829943   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:35.829952   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:35.830012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:35.872224   65393 cri.go:89] found id: ""
	I0404 22:57:35.872255   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.872267   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:35.872276   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:35.872341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:35.909984   65393 cri.go:89] found id: ""
	I0404 22:57:35.910010   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.910020   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:35.910027   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:35.910088   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:35.948755   65393 cri.go:89] found id: ""
	I0404 22:57:35.948784   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.948795   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:35.948805   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:35.948868   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:35.984167   65393 cri.go:89] found id: ""
	I0404 22:57:35.984195   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.984203   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:35.984212   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:35.984224   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:35.998714   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:35.998740   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:36.070003   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:36.070028   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:36.070044   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:36.149404   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:36.149442   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:36.190749   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:36.190776   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.362424   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.861745   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.306625   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.805161   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.841627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.341476   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:38.747728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:38.761768   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:38.761840   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:38.802423   65393 cri.go:89] found id: ""
	I0404 22:57:38.802456   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.802467   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:38.802474   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:38.802525   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:38.843476   65393 cri.go:89] found id: ""
	I0404 22:57:38.843500   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.843508   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:38.843513   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:38.843583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:38.883093   65393 cri.go:89] found id: ""
	I0404 22:57:38.883125   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.883136   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:38.883145   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:38.883203   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:38.919802   65393 cri.go:89] found id: ""
	I0404 22:57:38.919831   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.919840   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:38.919847   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:38.919914   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:38.955242   65393 cri.go:89] found id: ""
	I0404 22:57:38.955280   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.955294   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:38.955302   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:38.955366   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:38.993377   65393 cri.go:89] found id: ""
	I0404 22:57:38.993409   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.993420   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:38.993428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:38.993486   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:39.033379   65393 cri.go:89] found id: ""
	I0404 22:57:39.033406   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.033417   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:39.033424   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:39.033499   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:39.076827   65393 cri.go:89] found id: ""
	I0404 22:57:39.076853   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.076862   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:39.076870   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:39.076880   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:39.134007   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:39.134045   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:39.149271   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:39.149298   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:39.222569   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:39.222593   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:39.222608   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:39.309583   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:39.309613   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:37.367650   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.862967   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.305489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.807107   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.842108   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.340297   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.343170   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.850659   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:41.866430   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:41.866493   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:41.906390   65393 cri.go:89] found id: ""
	I0404 22:57:41.906418   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.906437   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:41.906445   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:41.906506   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:41.942002   65393 cri.go:89] found id: ""
	I0404 22:57:41.942029   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.942039   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:41.942047   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:41.942105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:41.984054   65393 cri.go:89] found id: ""
	I0404 22:57:41.984078   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.984086   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:41.984092   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:41.984167   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:42.022757   65393 cri.go:89] found id: ""
	I0404 22:57:42.022779   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.022787   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:42.022792   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:42.022869   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:42.063261   65393 cri.go:89] found id: ""
	I0404 22:57:42.063284   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.063292   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:42.063297   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:42.063347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:42.102752   65393 cri.go:89] found id: ""
	I0404 22:57:42.102783   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.102800   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:42.102809   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:42.102872   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:42.139533   65393 cri.go:89] found id: ""
	I0404 22:57:42.139561   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.139570   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:42.139576   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:42.139627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:42.180944   65393 cri.go:89] found id: ""
	I0404 22:57:42.180976   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.180986   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:42.180996   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:42.181008   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:42.231499   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:42.231533   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:42.247888   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:42.247918   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:42.327828   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:42.327849   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:42.327860   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:42.413471   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:42.413509   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:44.958686   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:44.973081   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:44.973159   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:45.011068   65393 cri.go:89] found id: ""
	I0404 22:57:45.011095   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.011103   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:45.011108   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:45.011162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:45.052522   65393 cri.go:89] found id: ""
	I0404 22:57:45.052560   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.052578   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:45.052586   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:45.052649   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:45.091172   65393 cri.go:89] found id: ""
	I0404 22:57:45.091204   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.091215   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:45.091222   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:45.091289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:45.129853   65393 cri.go:89] found id: ""
	I0404 22:57:45.129883   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.129892   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:45.129900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:45.129960   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:45.167779   65393 cri.go:89] found id: ""
	I0404 22:57:45.167806   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.167815   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:45.167822   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:45.167881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:45.205111   65393 cri.go:89] found id: ""
	I0404 22:57:45.205139   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.205153   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:45.205158   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:45.205231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:45.240931   65393 cri.go:89] found id: ""
	I0404 22:57:45.240957   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.240965   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:45.240971   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:45.241033   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:45.277934   65393 cri.go:89] found id: ""
	I0404 22:57:45.277956   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.277964   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:45.277974   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:45.277989   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:45.332725   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:45.332755   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:45.349776   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:45.349806   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:45.428071   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:45.428098   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:45.428113   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:45.510148   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:45.510188   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:42.361878   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.861076   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.304994   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.305336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.841019   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.341801   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.052655   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:48.067339   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:48.067416   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:48.101725   65393 cri.go:89] found id: ""
	I0404 22:57:48.101756   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.101765   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:48.101771   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:48.101823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:48.137109   65393 cri.go:89] found id: ""
	I0404 22:57:48.137136   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.137147   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:48.137153   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:48.137216   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:48.174703   65393 cri.go:89] found id: ""
	I0404 22:57:48.174735   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.174745   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:48.174751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:48.174800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:48.213531   65393 cri.go:89] found id: ""
	I0404 22:57:48.213554   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.213565   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:48.213572   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:48.213621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:48.252457   65393 cri.go:89] found id: ""
	I0404 22:57:48.252483   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.252493   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:48.252500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:48.252566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:48.297769   65393 cri.go:89] found id: ""
	I0404 22:57:48.297797   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.297807   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:48.297815   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:48.297871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:48.336275   65393 cri.go:89] found id: ""
	I0404 22:57:48.336297   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.336312   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:48.336319   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:48.336373   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:48.375192   65393 cri.go:89] found id: ""
	I0404 22:57:48.375220   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.375230   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:48.375242   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:48.375256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:48.428276   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:48.428309   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:48.442545   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:48.442572   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:48.521287   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:48.521314   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:48.521326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:48.605921   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:48.605965   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:51.148671   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:51.163922   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:51.163993   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:51.205098   65393 cri.go:89] found id: ""
	I0404 22:57:51.205127   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.205137   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:51.205145   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:51.205210   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:51.241238   65393 cri.go:89] found id: ""
	I0404 22:57:51.241265   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.241276   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:51.241287   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:51.241352   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:51.279667   65393 cri.go:89] found id: ""
	I0404 22:57:51.279691   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.279700   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:51.279710   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:51.279767   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:51.325092   65393 cri.go:89] found id: ""
	I0404 22:57:51.325114   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.325121   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:51.325127   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:51.325175   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:51.367212   65393 cri.go:89] found id: ""
	I0404 22:57:51.367233   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.367244   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:51.367252   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:51.367334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:51.407716   65393 cri.go:89] found id: ""
	I0404 22:57:51.407739   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.407747   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:51.407753   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:51.407800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:51.448021   65393 cri.go:89] found id: ""
	I0404 22:57:51.448050   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.448060   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:51.448066   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:51.448138   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:46.862201   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:49.361167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.365099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.806224   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.306785   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.840934   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.845038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.484815   65393 cri.go:89] found id: ""
	I0404 22:57:51.484841   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.484849   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:51.484857   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:51.484868   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:51.538503   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:51.538530   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:51.552658   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:51.552686   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:51.628261   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:51.628288   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:51.628313   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:51.713890   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:51.713935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:54.257046   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:54.270797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:54.270853   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:54.308550   65393 cri.go:89] found id: ""
	I0404 22:57:54.308579   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.308589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:54.308596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:54.308666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:54.347736   65393 cri.go:89] found id: ""
	I0404 22:57:54.347764   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.347773   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:54.347779   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:54.347825   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:54.389179   65393 cri.go:89] found id: ""
	I0404 22:57:54.389209   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.389220   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:54.389227   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:54.389286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:54.429961   65393 cri.go:89] found id: ""
	I0404 22:57:54.429984   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.429993   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:54.429998   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:54.430053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:54.469383   65393 cri.go:89] found id: ""
	I0404 22:57:54.469406   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.469414   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:54.469419   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:54.469481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:54.506260   65393 cri.go:89] found id: ""
	I0404 22:57:54.506291   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.506302   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:54.506309   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:54.506374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:54.545050   65393 cri.go:89] found id: ""
	I0404 22:57:54.545080   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.545090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:54.545096   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:54.545144   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:54.584895   65393 cri.go:89] found id: ""
	I0404 22:57:54.584932   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.584943   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:54.584955   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:54.584969   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:54.637294   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:54.637329   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:54.652792   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:54.652821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:54.729248   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:54.729268   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:54.729286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:54.810107   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:54.810135   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:53.860177   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.861048   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.805077   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.806516   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.306075   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.341767   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.343121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:57.356688   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:57.371841   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:57.371901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:57.408063   65393 cri.go:89] found id: ""
	I0404 22:57:57.408087   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.408095   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:57.408112   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:57.408199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:57.444197   65393 cri.go:89] found id: ""
	I0404 22:57:57.444222   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.444231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:57.444241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:57.444291   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:57.479502   65393 cri.go:89] found id: ""
	I0404 22:57:57.479528   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.479536   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:57.479542   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:57.479593   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:57.515017   65393 cri.go:89] found id: ""
	I0404 22:57:57.515044   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.515057   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:57.515064   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:57.515113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:57.550568   65393 cri.go:89] found id: ""
	I0404 22:57:57.550595   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.550603   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:57.550609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:57.550670   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:57.587662   65393 cri.go:89] found id: ""
	I0404 22:57:57.587689   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.587697   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:57.587703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:57.587761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:57.624184   65393 cri.go:89] found id: ""
	I0404 22:57:57.624206   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.624213   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:57.624219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:57.624274   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:57.662289   65393 cri.go:89] found id: ""
	I0404 22:57:57.662312   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.662320   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:57.662329   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:57.662339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:57.677703   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:57.677728   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:57.768164   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:57.768191   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:57.768206   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.853138   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:57.853175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:57.896254   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:57.896291   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.448289   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:00.462243   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:00.462306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:00.500146   65393 cri.go:89] found id: ""
	I0404 22:58:00.500180   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.500191   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:00.500199   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:00.500260   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:00.539443   65393 cri.go:89] found id: ""
	I0404 22:58:00.539469   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.539477   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:00.539482   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:00.539532   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:00.582120   65393 cri.go:89] found id: ""
	I0404 22:58:00.582149   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.582160   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:00.582167   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:00.582231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:00.622269   65393 cri.go:89] found id: ""
	I0404 22:58:00.622291   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.622299   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:00.622305   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:00.622361   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:00.659947   65393 cri.go:89] found id: ""
	I0404 22:58:00.659980   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.659992   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:00.659999   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:00.660053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:00.695604   65393 cri.go:89] found id: ""
	I0404 22:58:00.695632   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.695642   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:00.695650   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:00.695707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:00.737962   65393 cri.go:89] found id: ""
	I0404 22:58:00.738044   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.738065   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:00.738075   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:00.738168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:00.781043   65393 cri.go:89] found id: ""
	I0404 22:58:00.781069   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.781076   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:00.781085   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:00.781096   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:00.828143   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:00.828174   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.883483   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:00.883521   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:00.899435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:00.899463   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:00.975258   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:00.975277   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:00.975288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.861562   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.362255   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.306310   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.805208   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.844797   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.341139   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:03.553230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:03.567909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:03.568005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:03.608638   65393 cri.go:89] found id: ""
	I0404 22:58:03.608666   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.608675   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:03.608680   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:03.608727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:03.647698   65393 cri.go:89] found id: ""
	I0404 22:58:03.647726   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.647735   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:03.647741   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:03.647804   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:03.686143   65393 cri.go:89] found id: ""
	I0404 22:58:03.686167   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.686176   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:03.686181   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:03.686230   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:03.723462   65393 cri.go:89] found id: ""
	I0404 22:58:03.723487   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.723494   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:03.723500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:03.723556   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:03.762299   65393 cri.go:89] found id: ""
	I0404 22:58:03.762332   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.762342   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:03.762350   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:03.762426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:03.798165   65393 cri.go:89] found id: ""
	I0404 22:58:03.798198   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.798215   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:03.798225   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:03.798292   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:03.837607   65393 cri.go:89] found id: ""
	I0404 22:58:03.837636   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.837648   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:03.837655   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:03.837716   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:03.877346   65393 cri.go:89] found id: ""
	I0404 22:58:03.877380   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.877394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:03.877410   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:03.877432   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:03.934033   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:03.934073   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:03.949106   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:03.949131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:04.022674   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:04.022695   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:04.022707   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:04.101187   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:04.101225   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:02.860519   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:04.861267   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.305637   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.804768   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.341713   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.841527   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:06.653116   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:06.667790   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:06.667867   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:06.712208   65393 cri.go:89] found id: ""
	I0404 22:58:06.712230   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.712238   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:06.712243   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:06.712289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:06.748489   65393 cri.go:89] found id: ""
	I0404 22:58:06.748522   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.748533   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:06.748540   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:06.748602   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:06.786720   65393 cri.go:89] found id: ""
	I0404 22:58:06.786745   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.786753   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:06.786758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:06.786805   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:06.825404   65393 cri.go:89] found id: ""
	I0404 22:58:06.825437   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.825444   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:06.825461   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:06.825515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:06.864851   65393 cri.go:89] found id: ""
	I0404 22:58:06.864879   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.864890   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:06.864898   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:06.864959   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:06.906229   65393 cri.go:89] found id: ""
	I0404 22:58:06.906258   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.906268   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:06.906274   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:06.906327   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:06.946126   65393 cri.go:89] found id: ""
	I0404 22:58:06.946153   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.946164   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:06.946172   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:06.946234   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:06.983746   65393 cri.go:89] found id: ""
	I0404 22:58:06.983779   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.983792   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:06.983805   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:06.983821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:07.038290   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:07.038330   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:07.053847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:07.053875   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:07.127453   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:07.127479   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:07.127496   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:07.206638   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:07.206676   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:09.753016   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:09.768850   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:09.768918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:09.804549   65393 cri.go:89] found id: ""
	I0404 22:58:09.804579   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.804589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:09.804596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:09.804653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:09.847299   65393 cri.go:89] found id: ""
	I0404 22:58:09.847323   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.847334   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:09.847341   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:09.847399   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:09.902064   65393 cri.go:89] found id: ""
	I0404 22:58:09.902093   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.902104   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:09.902111   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:09.902171   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:09.953956   65393 cri.go:89] found id: ""
	I0404 22:58:09.953986   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.953997   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:09.954003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:09.954071   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:10.007853   65393 cri.go:89] found id: ""
	I0404 22:58:10.007884   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.007892   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:10.007897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:10.007954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:10.046924   65393 cri.go:89] found id: ""
	I0404 22:58:10.046960   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.046970   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:10.046977   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:10.047038   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:10.086851   65393 cri.go:89] found id: ""
	I0404 22:58:10.086878   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.086890   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:10.086896   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:10.086956   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:10.126678   65393 cri.go:89] found id: ""
	I0404 22:58:10.126710   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.126719   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:10.126727   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:10.126741   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:10.142641   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:10.142669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:10.226953   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:10.226978   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:10.226991   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:10.310046   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:10.310078   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:10.356140   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:10.356173   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:06.861772   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.361398   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:11.361942   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.806398   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.306003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.341652   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.341848   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.343338   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.911501   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:12.924292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:12.924374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:12.959969   65393 cri.go:89] found id: ""
	I0404 22:58:12.959997   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.960007   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:12.960015   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:12.960064   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:12.998817   65393 cri.go:89] found id: ""
	I0404 22:58:12.998846   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.998856   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:12.998876   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:12.998943   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:13.035295   65393 cri.go:89] found id: ""
	I0404 22:58:13.035326   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.035337   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:13.035343   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:13.035403   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:13.076712   65393 cri.go:89] found id: ""
	I0404 22:58:13.076735   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.076744   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:13.076751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:13.076823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:13.112987   65393 cri.go:89] found id: ""
	I0404 22:58:13.113015   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.113023   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:13.113029   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:13.113092   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:13.155330   65393 cri.go:89] found id: ""
	I0404 22:58:13.155355   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.155366   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:13.155373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:13.155432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:13.194322   65393 cri.go:89] found id: ""
	I0404 22:58:13.194357   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.194368   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:13.194374   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:13.194432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:13.231939   65393 cri.go:89] found id: ""
	I0404 22:58:13.231969   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.231981   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:13.231993   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:13.232012   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:13.312244   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:13.312278   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:13.356605   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:13.356640   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:13.411760   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:13.411801   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:13.427373   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:13.427397   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:13.509840   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.010230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:16.023985   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:16.024050   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:16.063038   65393 cri.go:89] found id: ""
	I0404 22:58:16.063062   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.063070   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:16.063075   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:16.063128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:16.100140   65393 cri.go:89] found id: ""
	I0404 22:58:16.100170   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.100182   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:16.100189   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:16.100252   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:16.137319   65393 cri.go:89] found id: ""
	I0404 22:58:16.137354   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.137362   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:16.137373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:16.137425   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:16.173127   65393 cri.go:89] found id: ""
	I0404 22:58:16.173151   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.173159   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:16.173166   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:16.173212   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:16.213639   65393 cri.go:89] found id: ""
	I0404 22:58:16.213667   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.213676   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:16.213683   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:16.213744   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:16.255272   65393 cri.go:89] found id: ""
	I0404 22:58:16.255301   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.255312   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:16.255320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:16.255381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:16.289359   65393 cri.go:89] found id: ""
	I0404 22:58:16.289387   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.289397   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:16.289404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:16.289466   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:16.326711   65393 cri.go:89] found id: ""
	I0404 22:58:16.326738   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.326748   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:16.326791   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:16.326817   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:16.374754   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:16.374788   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:16.425956   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:16.425994   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:16.440815   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:16.440848   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:58:13.363022   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:15.860774   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.306144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.804788   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.844733   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:21.340627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	W0404 22:58:16.512725   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.512750   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:16.512770   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.099694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:19.113559   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:19.113617   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:19.148531   65393 cri.go:89] found id: ""
	I0404 22:58:19.148553   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.148561   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:19.148567   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:19.148627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:19.186728   65393 cri.go:89] found id: ""
	I0404 22:58:19.186750   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.186759   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:19.186764   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:19.186809   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:19.223221   65393 cri.go:89] found id: ""
	I0404 22:58:19.223269   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.223277   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:19.223283   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:19.223350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:19.260459   65393 cri.go:89] found id: ""
	I0404 22:58:19.260492   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.260502   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:19.260509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:19.260571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:19.298503   65393 cri.go:89] found id: ""
	I0404 22:58:19.298527   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.298534   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:19.298540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:19.298603   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:19.339562   65393 cri.go:89] found id: ""
	I0404 22:58:19.339595   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.339605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:19.339613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:19.339674   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:19.379350   65393 cri.go:89] found id: ""
	I0404 22:58:19.379383   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.379394   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:19.379401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:19.379501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:19.417360   65393 cri.go:89] found id: ""
	I0404 22:58:19.417387   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.417394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:19.417403   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:19.417420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.493267   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:19.493300   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:19.533913   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:19.533948   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:19.585900   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:19.585936   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:19.601225   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:19.601259   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:19.675774   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:17.861139   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.360910   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.806558   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.807066   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.304681   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.342104   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.843695   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:22.176660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:22.190161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:22.190239   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:22.226569   65393 cri.go:89] found id: ""
	I0404 22:58:22.226601   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.226612   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:22.226621   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:22.226678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:22.263193   65393 cri.go:89] found id: ""
	I0404 22:58:22.263221   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.263232   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:22.263239   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:22.263296   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:22.305585   65393 cri.go:89] found id: ""
	I0404 22:58:22.305613   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.305625   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:22.305632   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:22.305688   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:22.343575   65393 cri.go:89] found id: ""
	I0404 22:58:22.343602   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.343613   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:22.343620   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:22.343675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:22.381394   65393 cri.go:89] found id: ""
	I0404 22:58:22.381423   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.381432   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:22.381438   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:22.381488   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:22.423631   65393 cri.go:89] found id: ""
	I0404 22:58:22.423664   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.423673   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:22.423680   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:22.423755   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:22.462618   65393 cri.go:89] found id: ""
	I0404 22:58:22.462651   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.462662   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:22.462669   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:22.462729   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:22.499448   65393 cri.go:89] found id: ""
	I0404 22:58:22.499473   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.499481   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:22.499490   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:22.499504   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:22.552937   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:22.552976   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:22.568480   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:22.568508   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:22.647552   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:22.647575   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:22.647587   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:22.732328   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:22.732366   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:25.275187   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:25.290575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:25.290660   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:25.329699   65393 cri.go:89] found id: ""
	I0404 22:58:25.329728   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.329737   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:25.329744   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:25.329808   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:25.368206   65393 cri.go:89] found id: ""
	I0404 22:58:25.368239   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.368250   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:25.368257   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:25.368320   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:25.405444   65393 cri.go:89] found id: ""
	I0404 22:58:25.405490   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.405500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:25.405508   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:25.405566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:25.448856   65393 cri.go:89] found id: ""
	I0404 22:58:25.448883   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.448891   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:25.448897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:25.448952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:25.489308   65393 cri.go:89] found id: ""
	I0404 22:58:25.489340   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.489351   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:25.489358   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:25.489418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:25.532786   65393 cri.go:89] found id: ""
	I0404 22:58:25.532810   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.532820   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:25.532828   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:25.532887   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:25.569370   65393 cri.go:89] found id: ""
	I0404 22:58:25.569400   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.569409   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:25.569428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:25.569508   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:25.609516   65393 cri.go:89] found id: ""
	I0404 22:58:25.609542   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.609553   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:25.609564   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:25.609579   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:25.661976   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:25.662011   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:25.676718   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:25.676743   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:25.754582   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:25.754612   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:25.754629   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:25.837707   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:25.837751   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:22.367423   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:24.860670   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:27.306275   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.343146   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.840946   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.381474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:28.396475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:28.396549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:28.439207   65393 cri.go:89] found id: ""
	I0404 22:58:28.439239   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.439251   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:28.439259   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:28.439341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:28.481537   65393 cri.go:89] found id: ""
	I0404 22:58:28.481559   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.481567   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:28.481572   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:28.481622   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:28.522159   65393 cri.go:89] found id: ""
	I0404 22:58:28.522183   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.522194   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:28.522202   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:28.522267   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:28.563587   65393 cri.go:89] found id: ""
	I0404 22:58:28.563623   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.563634   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:28.563641   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:28.563706   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:28.601846   65393 cri.go:89] found id: ""
	I0404 22:58:28.601874   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.601885   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:28.601892   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:28.601971   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:28.639734   65393 cri.go:89] found id: ""
	I0404 22:58:28.639758   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.639765   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:28.639773   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:28.639832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:28.679049   65393 cri.go:89] found id: ""
	I0404 22:58:28.679079   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.679090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:28.679097   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:28.679152   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:28.721353   65393 cri.go:89] found id: ""
	I0404 22:58:28.721380   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.721390   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:28.721400   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:28.721414   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:28.776618   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:28.776666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:28.792435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:28.792473   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:28.867190   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:28.867219   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:28.867238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:28.950021   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:28.950056   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:26.861121   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.861752   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.862006   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:29.805482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.806982   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:32.843651   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.341399   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.496813   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:31.511401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:31.511462   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:31.552422   65393 cri.go:89] found id: ""
	I0404 22:58:31.552450   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.552458   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:31.552464   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:31.552518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:31.590548   65393 cri.go:89] found id: ""
	I0404 22:58:31.590579   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.590591   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:31.590598   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:31.590653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:31.626198   65393 cri.go:89] found id: ""
	I0404 22:58:31.626227   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.626238   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:31.626245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:31.626316   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:31.664925   65393 cri.go:89] found id: ""
	I0404 22:58:31.664952   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.664960   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:31.664966   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:31.665017   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:31.703008   65393 cri.go:89] found id: ""
	I0404 22:58:31.703031   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.703038   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:31.703044   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:31.703103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:31.741854   65393 cri.go:89] found id: ""
	I0404 22:58:31.741875   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.741884   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:31.741890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:31.741942   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:31.782573   65393 cri.go:89] found id: ""
	I0404 22:58:31.782603   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.782616   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:31.782624   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:31.782675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:31.819796   65393 cri.go:89] found id: ""
	I0404 22:58:31.819832   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.819844   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:31.819855   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:31.819872   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:31.836362   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:31.836396   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:31.916417   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:31.916438   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:31.916451   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:31.999266   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:31.999302   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:32.048147   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:32.048179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:34.601683   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:34.618245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:34.618329   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:34.660565   65393 cri.go:89] found id: ""
	I0404 22:58:34.660591   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.660598   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:34.660604   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:34.660654   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:34.696366   65393 cri.go:89] found id: ""
	I0404 22:58:34.696389   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.696397   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:34.696402   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:34.696463   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:34.734234   65393 cri.go:89] found id: ""
	I0404 22:58:34.734281   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.734293   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:34.734300   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:34.734369   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:34.770632   65393 cri.go:89] found id: ""
	I0404 22:58:34.770668   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.770681   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:34.770689   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:34.770752   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:34.808562   65393 cri.go:89] found id: ""
	I0404 22:58:34.808590   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.808600   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:34.808607   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:34.808677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:34.844177   65393 cri.go:89] found id: ""
	I0404 22:58:34.844209   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.844219   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:34.844228   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:34.844315   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:34.886060   65393 cri.go:89] found id: ""
	I0404 22:58:34.886095   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.886106   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:34.886114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:34.886174   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:34.924731   65393 cri.go:89] found id: ""
	I0404 22:58:34.924759   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.924769   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:34.924781   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:34.924798   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:34.940405   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:34.940437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:35.016762   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:35.016784   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:35.016800   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:35.096653   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:35.096688   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:35.144213   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:35.144241   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:33.361607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.860598   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:34.307621   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:36.805003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.341552   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.841619   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.702332   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:37.716442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:37.716515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:37.755197   65393 cri.go:89] found id: ""
	I0404 22:58:37.755233   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.755244   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:37.755251   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:37.755311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:37.799011   65393 cri.go:89] found id: ""
	I0404 22:58:37.799036   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.799044   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:37.799048   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:37.799105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:37.837437   65393 cri.go:89] found id: ""
	I0404 22:58:37.837466   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.837477   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:37.837486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:37.837543   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:37.876014   65393 cri.go:89] found id: ""
	I0404 22:58:37.876085   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.876097   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:37.876104   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:37.876179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:37.915088   65393 cri.go:89] found id: ""
	I0404 22:58:37.915121   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.915132   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:37.915140   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:37.915205   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:37.954991   65393 cri.go:89] found id: ""
	I0404 22:58:37.955028   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.955039   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:37.955047   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:37.955120   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:38.006875   65393 cri.go:89] found id: ""
	I0404 22:58:38.006906   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.006924   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:38.006930   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:38.006989   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:38.044477   65393 cri.go:89] found id: ""
	I0404 22:58:38.044513   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.044541   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:38.044553   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:38.044569   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:38.086425   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:38.086455   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:38.140159   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:38.140195   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:38.156371   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:38.156406   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:38.229011   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:38.229035   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:38.229058   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:40.809399   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:40.824612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:40.824694   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:40.869397   65393 cri.go:89] found id: ""
	I0404 22:58:40.869483   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.869510   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:40.869523   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:40.869583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:40.911732   65393 cri.go:89] found id: ""
	I0404 22:58:40.911760   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.911782   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:40.911788   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:40.911846   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:40.952165   65393 cri.go:89] found id: ""
	I0404 22:58:40.952193   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.952202   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:40.952209   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:40.952270   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:40.991566   65393 cri.go:89] found id: ""
	I0404 22:58:40.991598   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.991607   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:40.991613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:40.991661   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:41.033467   65393 cri.go:89] found id: ""
	I0404 22:58:41.033496   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.033505   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:41.033534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:41.033595   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:41.073350   65393 cri.go:89] found id: ""
	I0404 22:58:41.073395   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.073405   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:41.073410   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:41.073460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:41.113435   65393 cri.go:89] found id: ""
	I0404 22:58:41.113467   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.113478   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:41.113486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:41.113549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:41.152848   65393 cri.go:89] found id: ""
	I0404 22:58:41.152882   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.152892   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:41.152905   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:41.152919   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:41.199001   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:41.199039   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:41.251155   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:41.251200   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:41.268640   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:41.268669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:41.345101   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:41.345125   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:41.345142   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:37.862623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.865276   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:38.805692   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:40.806509   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.305851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:42.342629   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.841943   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.925251   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:43.940719   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:43.940838   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:43.982366   65393 cri.go:89] found id: ""
	I0404 22:58:43.982391   65393 logs.go:276] 0 containers: []
	W0404 22:58:43.982401   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:43.982409   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:43.982477   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:44.026906   65393 cri.go:89] found id: ""
	I0404 22:58:44.026941   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.026952   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:44.026959   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:44.027024   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:44.063914   65393 cri.go:89] found id: ""
	I0404 22:58:44.063940   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.063948   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:44.063954   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:44.064008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:44.107234   65393 cri.go:89] found id: ""
	I0404 22:58:44.107270   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.107283   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:44.107292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:44.107388   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:44.144613   65393 cri.go:89] found id: ""
	I0404 22:58:44.144637   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.144658   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:44.144664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:44.144734   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:44.184821   65393 cri.go:89] found id: ""
	I0404 22:58:44.184858   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.184866   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:44.184872   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:44.184920   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:44.227152   65393 cri.go:89] found id: ""
	I0404 22:58:44.227181   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.227192   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:44.227200   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:44.227262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:44.266503   65393 cri.go:89] found id: ""
	I0404 22:58:44.266533   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.266544   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:44.266599   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:44.266614   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:44.323524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:44.323565   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:44.340420   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:44.340456   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:44.441098   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:44.441120   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:44.441137   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:44.554462   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:44.554498   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:42.361529   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.362158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:45.805321   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.805597   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.342230   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.840465   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.101901   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:47.116485   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:47.116551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:47.158013   65393 cri.go:89] found id: ""
	I0404 22:58:47.158047   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.158063   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:47.158071   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:47.158136   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:47.194653   65393 cri.go:89] found id: ""
	I0404 22:58:47.194677   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.194688   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:47.194696   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:47.194753   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:47.235406   65393 cri.go:89] found id: ""
	I0404 22:58:47.235435   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.235447   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:47.235456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:47.235518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:47.278696   65393 cri.go:89] found id: ""
	I0404 22:58:47.278724   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.278733   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:47.278741   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:47.278832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:47.316839   65393 cri.go:89] found id: ""
	I0404 22:58:47.316871   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.316883   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:47.316890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:47.316952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:47.363249   65393 cri.go:89] found id: ""
	I0404 22:58:47.363274   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.363282   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:47.363287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:47.363336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:47.402331   65393 cri.go:89] found id: ""
	I0404 22:58:47.402354   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.402362   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:47.402369   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:47.402429   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:47.441136   65393 cri.go:89] found id: ""
	I0404 22:58:47.441156   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.441163   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:47.441171   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:47.441182   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:47.518956   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:47.518981   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:47.518996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:47.600303   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:47.600339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:47.642110   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:47.642138   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:47.694231   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:47.694267   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.209744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:50.229994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:50.230052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:50.299090   65393 cri.go:89] found id: ""
	I0404 22:58:50.299237   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.299257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:50.299266   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:50.299324   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:50.353476   65393 cri.go:89] found id: ""
	I0404 22:58:50.353506   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.353516   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:50.353524   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:50.353580   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:50.407652   65393 cri.go:89] found id: ""
	I0404 22:58:50.407677   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.407684   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:50.407692   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:50.407775   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:50.445622   65393 cri.go:89] found id: ""
	I0404 22:58:50.445651   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.445658   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:50.445666   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:50.445749   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:50.485764   65393 cri.go:89] found id: ""
	I0404 22:58:50.485791   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.485803   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:50.485810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:50.485891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:50.525552   65393 cri.go:89] found id: ""
	I0404 22:58:50.525583   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.525601   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:50.525609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:50.525675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:50.564452   65393 cri.go:89] found id: ""
	I0404 22:58:50.564477   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.564488   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:50.564498   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:50.564548   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:50.608534   65393 cri.go:89] found id: ""
	I0404 22:58:50.608564   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.608572   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:50.608580   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:50.608592   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:50.686645   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:50.686690   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:50.731681   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:50.731711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:50.788550   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:50.788593   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.804637   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:50.804666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:50.888452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:46.860641   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.362015   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.808312   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:52.304782   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:51.841258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.842803   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.341352   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.388937   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:53.403744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:53.403822   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:53.441796   65393 cri.go:89] found id: ""
	I0404 22:58:53.441817   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.441826   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:53.441835   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:53.441899   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:53.486302   65393 cri.go:89] found id: ""
	I0404 22:58:53.486326   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.486335   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:53.486340   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:53.486405   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:53.526429   65393 cri.go:89] found id: ""
	I0404 22:58:53.526455   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.526462   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:53.526467   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:53.526521   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:53.568076   65393 cri.go:89] found id: ""
	I0404 22:58:53.568099   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.568107   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:53.568114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:53.568185   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:53.608922   65393 cri.go:89] found id: ""
	I0404 22:58:53.608956   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.608964   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:53.608970   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:53.609027   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:53.645604   65393 cri.go:89] found id: ""
	I0404 22:58:53.645635   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.645646   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:53.645658   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:53.645718   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:53.684258   65393 cri.go:89] found id: ""
	I0404 22:58:53.684283   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.684293   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:53.684301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:53.684359   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:53.722616   65393 cri.go:89] found id: ""
	I0404 22:58:53.722647   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.722658   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:53.722671   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:53.722685   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:53.781126   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:53.781162   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:53.796188   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:53.796219   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:53.880536   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:53.880558   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:53.880571   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:53.970199   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:53.970236   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:51.861594   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.864468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.361876   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:54.306294   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.805489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.341483   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.840493   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.510589   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:56.525258   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:56.525336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:56.563463   65393 cri.go:89] found id: ""
	I0404 22:58:56.563495   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.563506   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:56.563514   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:56.563576   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:56.603330   65393 cri.go:89] found id: ""
	I0404 22:58:56.603355   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.603364   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:56.603369   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:56.603418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:56.645266   65393 cri.go:89] found id: ""
	I0404 22:58:56.645291   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.645351   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:56.645368   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:56.645426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:56.691073   65393 cri.go:89] found id: ""
	I0404 22:58:56.691098   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.691108   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:56.691121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:56.691179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:56.729687   65393 cri.go:89] found id: ""
	I0404 22:58:56.729726   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.729737   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:56.729744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:56.729832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:56.770699   65393 cri.go:89] found id: ""
	I0404 22:58:56.770732   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.770743   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:56.770751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:56.770815   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:56.808948   65393 cri.go:89] found id: ""
	I0404 22:58:56.808972   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.808982   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:56.808989   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:56.809069   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:56.852467   65393 cri.go:89] found id: ""
	I0404 22:58:56.852490   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.852501   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:56.852511   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:56.852526   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:56.903115   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:56.903147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:56.919521   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:56.919550   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:56.995265   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:56.995297   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:56.995317   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:57.071623   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:57.071660   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:59.614687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:59.628404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:59.628474   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:59.666293   65393 cri.go:89] found id: ""
	I0404 22:58:59.666320   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.666328   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:59.666332   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:59.666407   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:59.706066   65393 cri.go:89] found id: ""
	I0404 22:58:59.706093   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.706104   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:59.706111   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:59.706166   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:59.750467   65393 cri.go:89] found id: ""
	I0404 22:58:59.750493   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.750504   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:59.750511   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:59.750575   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:59.788477   65393 cri.go:89] found id: ""
	I0404 22:58:59.788499   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.788507   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:59.788512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:59.788558   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:59.829030   65393 cri.go:89] found id: ""
	I0404 22:58:59.829052   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.829062   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:59.829069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:59.829151   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:59.870121   65393 cri.go:89] found id: ""
	I0404 22:58:59.870146   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.870156   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:59.870163   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:59.870225   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:59.912149   65393 cri.go:89] found id: ""
	I0404 22:58:59.912170   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.912178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:59.912185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:59.912245   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:59.950867   65393 cri.go:89] found id: ""
	I0404 22:58:59.950903   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.950914   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:59.950924   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:59.950950   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:00.031828   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:00.031862   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:00.079398   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:00.079425   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:00.128993   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:00.129024   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:00.146214   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:00.146238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:00.224580   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:58.362770   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.861231   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.806527   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.305039   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:03.306465   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.845344   64902 pod_ready.go:81] duration metric: took 4m0.011500779s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:01.845369   64902 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:01.845377   64902 pod_ready.go:38] duration metric: took 4m3.21302807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:01.845392   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:01.845433   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:01.845499   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:01.927539   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:01.927568   64902 cri.go:89] found id: ""
	I0404 22:59:01.927578   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:01.927638   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.933410   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:01.933496   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:01.990735   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:01.990765   64902 cri.go:89] found id: ""
	I0404 22:59:01.990773   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:01.990823   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.996039   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:01.996159   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.043251   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:02.043278   64902 cri.go:89] found id: ""
	I0404 22:59:02.043286   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:02.043339   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.048227   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.048300   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.089311   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:02.089334   64902 cri.go:89] found id: ""
	I0404 22:59:02.089344   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:02.089400   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.094466   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.094531   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.146624   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:02.146646   64902 cri.go:89] found id: ""
	I0404 22:59:02.146653   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:02.146711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.151408   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.151491   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.194337   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:02.194361   64902 cri.go:89] found id: ""
	I0404 22:59:02.194370   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:02.194423   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.199225   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:02.199290   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:02.240467   64902 cri.go:89] found id: ""
	I0404 22:59:02.240494   64902 logs.go:276] 0 containers: []
	W0404 22:59:02.240505   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:02.240511   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:02.240572   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:02.284235   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:02.284261   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.284265   64902 cri.go:89] found id: ""
	I0404 22:59:02.284272   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:02.284337   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.289673   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.294519   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:02.294542   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.335250   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:02.335274   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:02.903414   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:02.903453   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:02.959171   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:02.959205   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:03.121608   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:03.121639   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:03.178477   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:03.178513   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:03.220790   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:03.220827   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:03.268659   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:03.268691   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:03.349809   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:03.349853   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:03.397296   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.397325   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:03.450216   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.450242   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.467583   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:03.467610   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:03.525777   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:03.525816   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.077111   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:06.097631   64902 api_server.go:72] duration metric: took 4m15.180038628s to wait for apiserver process to appear ...
	I0404 22:59:06.097660   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:06.097705   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:06.097767   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:06.146987   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:06.147013   64902 cri.go:89] found id: ""
	I0404 22:59:06.147023   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:06.147083   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.153474   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:06.153549   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:06.204491   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.204515   64902 cri.go:89] found id: ""
	I0404 22:59:06.204522   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:06.204576   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.209689   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:06.209768   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.248698   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:06.248730   64902 cri.go:89] found id: ""
	I0404 22:59:06.248741   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:06.248803   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.254268   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.254362   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.301004   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.301024   64902 cri.go:89] found id: ""
	I0404 22:59:06.301034   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:06.301093   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.306557   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.306625   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.350111   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:06.350136   64902 cri.go:89] found id: ""
	I0404 22:59:06.350146   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:06.350205   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.355488   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.355574   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.724888   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:02.738926   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:02.738986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:02.780451   65393 cri.go:89] found id: ""
	I0404 22:59:02.780475   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.780486   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:02.780493   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:02.780551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:02.825725   65393 cri.go:89] found id: ""
	I0404 22:59:02.825745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.825753   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:02.825758   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:02.825806   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.866717   65393 cri.go:89] found id: ""
	I0404 22:59:02.866745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.866752   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:02.866757   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.866803   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.909020   65393 cri.go:89] found id: ""
	I0404 22:59:02.909040   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.909048   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:02.909053   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.909103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.951031   65393 cri.go:89] found id: ""
	I0404 22:59:02.951055   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.951064   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:02.951069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.951128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:03.000274   65393 cri.go:89] found id: ""
	I0404 22:59:03.000304   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.000315   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:03.000322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:03.000385   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:03.041766   65393 cri.go:89] found id: ""
	I0404 22:59:03.041797   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.041807   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:03.041814   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:03.041871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:03.086600   65393 cri.go:89] found id: ""
	I0404 22:59:03.086623   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.086631   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:03.086639   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:03.086654   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:03.145868   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.145902   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.164345   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:03.164373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:03.239295   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:03.239331   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:03.239347   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:03.337429   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.337471   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:05.885881   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:05.899569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:05.899627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:05.939067   65393 cri.go:89] found id: ""
	I0404 22:59:05.939090   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.939097   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:05.939104   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:05.939163   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:05.977410   65393 cri.go:89] found id: ""
	I0404 22:59:05.977434   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.977441   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:05.977447   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:05.977492   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.018127   65393 cri.go:89] found id: ""
	I0404 22:59:06.018149   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.018156   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:06.018161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.018211   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.057280   65393 cri.go:89] found id: ""
	I0404 22:59:06.057316   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.057327   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:06.057334   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.057396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.095220   65393 cri.go:89] found id: ""
	I0404 22:59:06.095246   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.095255   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:06.095262   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.095334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:06.136192   65393 cri.go:89] found id: ""
	I0404 22:59:06.136291   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.136310   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:06.136320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.136381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.193307   65393 cri.go:89] found id: ""
	I0404 22:59:06.193336   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.193347   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.193355   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:06.193415   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:06.233527   65393 cri.go:89] found id: ""
	I0404 22:59:06.233558   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.233566   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:06.233574   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.233585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:06.320567   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.320602   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.363687   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:06.363718   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:06.423209   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:06.423246   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:06.437978   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:06.438009   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:02.862485   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.360827   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.805057   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:07.805584   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:06.405660   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.405683   64902 cri.go:89] found id: ""
	I0404 22:59:06.405693   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:06.405758   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.410717   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.410794   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.462354   64902 cri.go:89] found id: ""
	I0404 22:59:06.462386   64902 logs.go:276] 0 containers: []
	W0404 22:59:06.462398   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.462404   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:06.462452   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:06.511014   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.511038   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.511044   64902 cri.go:89] found id: ""
	I0404 22:59:06.511052   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:06.511110   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.517858   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.522766   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:06.522794   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.576654   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:06.576689   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.623256   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:06.623286   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.678337   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:06.678369   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.722261   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:06.722291   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.762151   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.762184   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.814956   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.814983   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:07.273914   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:07.273975   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:07.328704   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:07.328745   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:07.344734   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:07.344765   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:07.473031   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:07.473067   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:07.523879   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:07.523922   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:07.569734   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:07.569793   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.109972   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:59:10.115439   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:59:10.117026   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:59:10.117051   64902 api_server.go:131] duration metric: took 4.019378057s to wait for apiserver health ...
	I0404 22:59:10.117059   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:10.117084   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:10.117138   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:10.161095   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.161113   64902 cri.go:89] found id: ""
	I0404 22:59:10.161120   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:10.161167   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.165630   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:10.165694   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:10.204636   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.204655   64902 cri.go:89] found id: ""
	I0404 22:59:10.204662   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:10.204711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.209645   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:10.209721   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:10.256830   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:10.256856   64902 cri.go:89] found id: ""
	I0404 22:59:10.256866   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:10.256917   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.261699   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:10.261763   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:10.304897   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.304914   64902 cri.go:89] found id: ""
	I0404 22:59:10.304922   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:10.304976   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.310884   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:10.310961   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:10.349724   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.349745   64902 cri.go:89] found id: ""
	I0404 22:59:10.349754   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:10.349811   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.354588   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:10.354643   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:10.399066   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.399087   64902 cri.go:89] found id: ""
	I0404 22:59:10.399113   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:10.399160   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.404698   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:10.404771   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:10.454142   64902 cri.go:89] found id: ""
	I0404 22:59:10.454173   64902 logs.go:276] 0 containers: []
	W0404 22:59:10.454183   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:10.454189   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:10.454347   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:10.503594   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.503620   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.503624   64902 cri.go:89] found id: ""
	I0404 22:59:10.503633   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:10.503696   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.510223   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.515081   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:10.515102   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.571927   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:10.571965   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.625355   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:10.625391   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.669033   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:10.669061   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.729035   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:10.729070   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.778108   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:10.778138   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.828328   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:10.828355   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:10.885732   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:10.885784   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:10.905718   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:10.905759   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:11.284079   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:11.284115   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:11.328982   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:11.329013   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:11.372384   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:11.372415   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:06.524620   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:09.025228   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:09.040219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:09.040306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:09.077397   65393 cri.go:89] found id: ""
	I0404 22:59:09.077428   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.077439   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:09.077447   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:09.077530   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:09.115279   65393 cri.go:89] found id: ""
	I0404 22:59:09.115309   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.115319   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:09.115326   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:09.115391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:09.156338   65393 cri.go:89] found id: ""
	I0404 22:59:09.156367   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.156375   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:09.156381   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:09.156444   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:09.199281   65393 cri.go:89] found id: ""
	I0404 22:59:09.199310   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.199319   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:09.199325   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:09.199377   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:09.239845   65393 cri.go:89] found id: ""
	I0404 22:59:09.239870   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.239878   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:09.239883   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:09.239944   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:09.285520   65393 cri.go:89] found id: ""
	I0404 22:59:09.285551   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.285562   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:09.285569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:09.285635   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:09.322005   65393 cri.go:89] found id: ""
	I0404 22:59:09.322033   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.322043   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:09.322050   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:09.322113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:09.357356   65393 cri.go:89] found id: ""
	I0404 22:59:09.357384   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.357394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:09.357404   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:09.357419   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:09.437353   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:09.437389   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:09.480066   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:09.480095   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:09.534394   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:09.534433   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:09.548926   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:09.548951   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:09.623970   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:07.363250   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.860037   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.806287   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:12.306720   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:11.521189   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:11.521222   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:14.076828   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:14.076859   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.076866   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.076872   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.076878   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.076882   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.076888   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.076895   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.076901   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.076911   64902 system_pods.go:74] duration metric: took 3.959845225s to wait for pod list to return data ...
	I0404 22:59:14.076920   64902 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:14.080018   64902 default_sa.go:45] found service account: "default"
	I0404 22:59:14.080052   64902 default_sa.go:55] duration metric: took 3.124198ms for default service account to be created ...
	I0404 22:59:14.080063   64902 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:14.085812   64902 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:14.085841   64902 system_pods.go:89] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.085847   64902 system_pods.go:89] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.085851   64902 system_pods.go:89] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.085855   64902 system_pods.go:89] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.085859   64902 system_pods.go:89] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.085863   64902 system_pods.go:89] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.085871   64902 system_pods.go:89] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.085875   64902 system_pods.go:89] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.085882   64902 system_pods.go:126] duration metric: took 5.81489ms to wait for k8s-apps to be running ...
	I0404 22:59:14.085889   64902 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:14.085933   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:14.106253   64902 system_svc.go:56] duration metric: took 20.352553ms WaitForService to wait for kubelet
	I0404 22:59:14.106295   64902 kubeadm.go:576] duration metric: took 4m23.188703249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:14.106319   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:14.110333   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:14.110359   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:14.110373   64902 node_conditions.go:105] duration metric: took 4.048469ms to run NodePressure ...
	I0404 22:59:14.110389   64902 start.go:240] waiting for startup goroutines ...
	I0404 22:59:14.110399   64902 start.go:245] waiting for cluster config update ...
	I0404 22:59:14.110412   64902 start.go:254] writing updated cluster config ...
	I0404 22:59:14.110736   64902 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:14.160959   64902 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 22:59:14.164129   64902 out.go:177] * Done! kubectl is now configured to use "embed-certs-143118" cluster and "default" namespace by default
	I0404 22:59:12.124508   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:12.139786   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:12.139862   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:12.179841   65393 cri.go:89] found id: ""
	I0404 22:59:12.179864   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.179872   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:12.179877   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:12.179934   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:12.217232   65393 cri.go:89] found id: ""
	I0404 22:59:12.217260   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.217270   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:12.217277   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:12.217333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:12.258875   65393 cri.go:89] found id: ""
	I0404 22:59:12.258905   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.258917   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:12.258927   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:12.258990   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:12.302455   65393 cri.go:89] found id: ""
	I0404 22:59:12.302493   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.302508   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:12.302516   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:12.302581   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:12.345264   65393 cri.go:89] found id: ""
	I0404 22:59:12.345298   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.345310   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:12.345322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:12.345386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:12.383782   65393 cri.go:89] found id: ""
	I0404 22:59:12.383805   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.383814   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:12.383820   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:12.383881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:12.421738   65393 cri.go:89] found id: ""
	I0404 22:59:12.421767   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.421777   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:12.421784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:12.421844   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:12.460348   65393 cri.go:89] found id: ""
	I0404 22:59:12.460379   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.460391   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:12.460407   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:12.460424   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:12.516043   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:12.516081   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:12.531557   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:12.531585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:12.603052   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:12.603080   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:12.603098   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:12.689033   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:12.689069   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.253621   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:15.268084   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:15.268162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:15.306888   65393 cri.go:89] found id: ""
	I0404 22:59:15.306913   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.306922   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:15.306929   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:15.306986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:15.345169   65393 cri.go:89] found id: ""
	I0404 22:59:15.345203   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.345214   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:15.345221   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:15.345279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:15.381835   65393 cri.go:89] found id: ""
	I0404 22:59:15.381863   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.381874   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:15.381881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:15.381941   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:15.418221   65393 cri.go:89] found id: ""
	I0404 22:59:15.418247   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.418254   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:15.418259   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:15.418302   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:15.456658   65393 cri.go:89] found id: ""
	I0404 22:59:15.456684   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.456696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:15.456703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:15.456761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:15.498325   65393 cri.go:89] found id: ""
	I0404 22:59:15.498349   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.498359   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:15.498367   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:15.498443   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:15.538694   65393 cri.go:89] found id: ""
	I0404 22:59:15.538723   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.538731   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:15.538738   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:15.538796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:15.575615   65393 cri.go:89] found id: ""
	I0404 22:59:15.575642   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.575650   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:15.575660   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:15.575672   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.616824   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:15.616851   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:15.670897   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:15.670945   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:15.688394   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:15.688429   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:15.764184   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:15.764207   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:15.764222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:11.860993   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:13.861520   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:16.361055   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:14.809060   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:17.308968   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:18.346181   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:18.361390   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:18.361465   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:18.410432   65393 cri.go:89] found id: ""
	I0404 22:59:18.410463   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.410474   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:18.410482   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:18.410547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:18.449280   65393 cri.go:89] found id: ""
	I0404 22:59:18.449309   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.449317   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:18.449322   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:18.449380   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:18.508387   65393 cri.go:89] found id: ""
	I0404 22:59:18.508411   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.508420   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:18.508425   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:18.508481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:18.555469   65393 cri.go:89] found id: ""
	I0404 22:59:18.555492   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.555501   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:18.555506   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:18.555551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:18.592206   65393 cri.go:89] found id: ""
	I0404 22:59:18.592231   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.592239   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:18.592245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:18.592294   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:18.628850   65393 cri.go:89] found id: ""
	I0404 22:59:18.628890   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.628900   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:18.628908   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:18.628968   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:18.667507   65393 cri.go:89] found id: ""
	I0404 22:59:18.667543   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.667556   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:18.667564   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:18.667630   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:18.706367   65393 cri.go:89] found id: ""
	I0404 22:59:18.706392   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.706410   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:18.706422   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:18.706438   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:18.761069   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:18.761108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:18.777164   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:18.777204   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:18.861741   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:18.861769   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:18.861782   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:18.948064   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:18.948108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:18.361239   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:20.362324   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:19.805576   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.805654   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.497977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:21.511810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:21.511901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:21.548151   65393 cri.go:89] found id: ""
	I0404 22:59:21.548177   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.548188   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:21.548196   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:21.548241   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:21.587395   65393 cri.go:89] found id: ""
	I0404 22:59:21.587420   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.587436   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:21.587441   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:21.587507   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:21.624280   65393 cri.go:89] found id: ""
	I0404 22:59:21.624312   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.624322   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:21.624330   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:21.624391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:21.664557   65393 cri.go:89] found id: ""
	I0404 22:59:21.664583   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.664593   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:21.664600   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:21.664666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:21.705570   65393 cri.go:89] found id: ""
	I0404 22:59:21.705601   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.705614   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:21.705622   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:21.705683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:21.744722   65393 cri.go:89] found id: ""
	I0404 22:59:21.744755   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.744764   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:21.744770   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:21.744831   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:21.784997   65393 cri.go:89] found id: ""
	I0404 22:59:21.785036   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.785047   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:21.785054   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:21.785117   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:21.825404   65393 cri.go:89] found id: ""
	I0404 22:59:21.825428   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.825435   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:21.825443   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:21.825467   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:21.880421   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:21.880470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:21.898337   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:21.898367   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:21.987201   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:21.987233   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:21.987249   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.070135   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:22.070176   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:24.613694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:24.627661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:24.627823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:24.667552   65393 cri.go:89] found id: ""
	I0404 22:59:24.667580   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.667594   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:24.667601   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:24.667663   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:24.705866   65393 cri.go:89] found id: ""
	I0404 22:59:24.705888   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.705897   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:24.705905   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:24.705975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:24.746920   65393 cri.go:89] found id: ""
	I0404 22:59:24.746948   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.746959   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:24.746967   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:24.747021   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:24.792236   65393 cri.go:89] found id: ""
	I0404 22:59:24.792259   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.792270   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:24.792281   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:24.792340   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:24.832066   65393 cri.go:89] found id: ""
	I0404 22:59:24.832096   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.832107   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:24.832133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:24.832207   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:24.873568   65393 cri.go:89] found id: ""
	I0404 22:59:24.873594   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.873605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:24.873612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:24.873678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:24.922719   65393 cri.go:89] found id: ""
	I0404 22:59:24.922743   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.922750   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:24.922756   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:24.922801   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:24.981180   65393 cri.go:89] found id: ""
	I0404 22:59:24.981229   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.981243   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:24.981255   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:24.981272   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:25.039695   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:25.039735   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:25.097992   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:25.098037   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:25.113941   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:25.113970   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:25.185615   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:25.185643   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:25.185659   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.362720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.363009   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.305260   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:26.805336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:27.772867   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:27.787478   65393 kubeadm.go:591] duration metric: took 4m3.182360219s to restartPrimaryControlPlane
	W0404 22:59:27.787560   65393 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:27.787594   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 22:59:30.083285   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.29566411s)
	I0404 22:59:30.083364   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:30.099547   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:59:30.110792   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:59:30.123094   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:59:30.123110   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:59:30.123152   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:59:30.133535   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:59:30.133596   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:59:30.144194   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:59:30.154411   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:59:30.154476   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:59:30.164648   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.174227   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:59:30.174292   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.184396   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:59:30.194311   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:59:30.194370   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:59:30.204463   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 22:59:30.284881   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 22:59:30.285065   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 22:59:30.439256   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 22:59:30.439379   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 22:59:30.439558   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 22:59:30.640320   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 22:59:30.642667   65393 out.go:204]   - Generating certificates and keys ...
	I0404 22:59:30.642787   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 22:59:30.642883   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 22:59:30.643858   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 22:59:30.643964   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 22:59:30.644068   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 22:59:30.644183   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 22:59:30.644914   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 22:59:30.646151   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 22:59:30.646413   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 22:59:30.646975   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 22:59:30.647036   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 22:59:30.647163   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 22:59:30.818578   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 22:59:31.002928   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 22:59:31.300200   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 22:59:26.861545   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:29.361333   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.508251   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 22:59:31.525515   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 22:59:31.527679   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 22:59:31.527773   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 22:59:31.680829   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 22:59:28.806960   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.305235   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:33.305735   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.682802   65393 out.go:204]   - Booting up control plane ...
	I0404 22:59:31.682939   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 22:59:31.684100   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 22:59:31.685552   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 22:59:31.686931   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 22:59:31.689241   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 22:59:31.863885   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:34.361582   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:36.362733   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:35.810851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:37.305033   65047 pod_ready.go:81] duration metric: took 4m0.006524977s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:37.305064   65047 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:37.305072   65047 pod_ready.go:38] duration metric: took 4m5.047389638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:37.305095   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:37.305121   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:37.305167   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:37.365002   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:37.365022   65047 cri.go:89] found id: ""
	I0404 22:59:37.365029   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:37.365079   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.370431   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:37.370490   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:37.411461   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:37.411489   65047 cri.go:89] found id: ""
	I0404 22:59:37.411498   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:37.411546   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.416214   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:37.416280   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:37.467470   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:37.467500   65047 cri.go:89] found id: ""
	I0404 22:59:37.467510   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:37.467565   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.472332   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:37.472394   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:37.511792   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:37.511815   65047 cri.go:89] found id: ""
	I0404 22:59:37.511821   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:37.511870   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.516458   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:37.516514   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:37.556843   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:37.556869   65047 cri.go:89] found id: ""
	I0404 22:59:37.556880   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:37.556941   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.561556   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:37.561617   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:37.601741   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.601764   65047 cri.go:89] found id: ""
	I0404 22:59:37.601775   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:37.601831   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.606376   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:37.606449   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:37.647116   65047 cri.go:89] found id: ""
	I0404 22:59:37.647139   65047 logs.go:276] 0 containers: []
	W0404 22:59:37.647146   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:37.647151   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:37.647211   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:37.694580   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.694603   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:37.694607   65047 cri.go:89] found id: ""
	I0404 22:59:37.694614   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:37.694662   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.699109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.703776   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:37.703802   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.758969   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:37.759001   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.808316   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:37.808339   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:37.873353   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:37.873388   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:37.891256   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:37.891290   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:38.047292   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:38.047323   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:38.104845   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:38.104881   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:38.180173   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:38.180209   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:38.225152   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:38.225185   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:38.275621   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:38.275647   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:38.861119   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:40.862057   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:38.791198   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:38.791239   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:38.838995   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:38.839032   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:38.889944   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:38.889980   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.437490   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:41.454599   65047 api_server.go:72] duration metric: took 4m14.969220816s to wait for apiserver process to appear ...
	I0404 22:59:41.454641   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:41.454676   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:41.454729   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:41.496719   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.496740   65047 cri.go:89] found id: ""
	I0404 22:59:41.496747   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:41.496790   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.501869   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:41.501949   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:41.548019   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:41.548038   65047 cri.go:89] found id: ""
	I0404 22:59:41.548047   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:41.548105   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.552683   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:41.552743   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:41.594018   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.594046   65047 cri.go:89] found id: ""
	I0404 22:59:41.594054   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:41.594109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.598612   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:41.598670   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:41.640618   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:41.640644   65047 cri.go:89] found id: ""
	I0404 22:59:41.640654   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:41.640711   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.645201   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:41.645271   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:41.691378   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.691401   65047 cri.go:89] found id: ""
	I0404 22:59:41.691410   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:41.691465   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.696359   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:41.696421   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:41.738169   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:41.738199   65047 cri.go:89] found id: ""
	I0404 22:59:41.738208   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:41.738261   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.742769   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:41.742844   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:41.782136   65047 cri.go:89] found id: ""
	I0404 22:59:41.782163   65047 logs.go:276] 0 containers: []
	W0404 22:59:41.782175   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:41.782181   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:41.782244   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:41.825698   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:41.825717   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:41.825721   65047 cri.go:89] found id: ""
	I0404 22:59:41.825728   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:41.825773   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.834332   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.840251   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:41.840280   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.914817   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:41.914856   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.956375   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:41.956401   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.999930   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:41.999960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:42.067118   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:42.067148   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:42.104788   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:42.104818   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:42.542407   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:42.542444   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:42.596923   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:42.596957   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:42.613545   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:42.613571   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:42.732728   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:42.732756   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:42.801975   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:42.802010   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:42.844728   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:42.844757   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:42.897576   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:42.897602   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:42.863167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.360824   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.448107   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:59:45.453565   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:59:45.454921   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:59:45.454945   65047 api_server.go:131] duration metric: took 4.000295856s to wait for apiserver health ...
	I0404 22:59:45.454955   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:45.454985   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:45.455041   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:45.499810   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:45.499841   65047 cri.go:89] found id: ""
	I0404 22:59:45.499849   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:45.499903   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.506696   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:45.506797   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:45.549063   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:45.549087   65047 cri.go:89] found id: ""
	I0404 22:59:45.549094   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:45.549135   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.553601   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:45.553663   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:45.605961   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:45.605989   65047 cri.go:89] found id: ""
	I0404 22:59:45.605999   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:45.606057   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.610424   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:45.610496   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:45.651488   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:45.651520   65047 cri.go:89] found id: ""
	I0404 22:59:45.651530   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:45.651589   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.655935   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:45.656005   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:45.694207   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:45.694225   65047 cri.go:89] found id: ""
	I0404 22:59:45.694235   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:45.694290   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.698466   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:45.698520   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:45.736294   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:45.736318   65047 cri.go:89] found id: ""
	I0404 22:59:45.736326   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:45.736389   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.741126   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:45.741200   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:45.783225   65047 cri.go:89] found id: ""
	I0404 22:59:45.783254   65047 logs.go:276] 0 containers: []
	W0404 22:59:45.783265   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:45.783271   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:45.783332   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:45.823086   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:45.823110   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:45.823116   65047 cri.go:89] found id: ""
	I0404 22:59:45.823126   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:45.823182   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.827497   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.832389   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:45.832416   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:45.892925   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:45.892967   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:46.018980   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:46.019011   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:46.083405   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:46.083438   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:46.144135   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:46.144169   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:46.190770   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:46.190803   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:46.257768   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:46.257801   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:46.275102   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:46.275127   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:46.312291   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:46.312318   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:46.351086   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:46.351112   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:46.393478   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:46.393506   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:46.438745   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:46.438778   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:46.822672   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:46.822706   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
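The log-collection pass above can be reproduced by hand on the node; a rough equivalent, assuming SSH access to the VM and crictl on the PATH (the flags mirror the commands minikube ran in this run, and <container-id> is a placeholder for one of the IDs it discovered):

    sudo journalctl -u kubelet -n 400                        # kubelet logs
    sudo journalctl -u crio -n 400                           # CRI-O logs
    sudo crictl ps -a --quiet --name=storage-provisioner     # look up container IDs by name
    sudo crictl logs --tail 400 <container-id>               # dump one container's recent log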
	I0404 22:59:49.386090   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:49.386122   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.386127   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.386133   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.386137   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.386140   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.386143   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.386150   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.386156   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.386165   65047 system_pods.go:74] duration metric: took 3.931202298s to wait for pod list to return data ...
	I0404 22:59:49.386176   65047 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:49.389035   65047 default_sa.go:45] found service account: "default"
	I0404 22:59:49.389067   65047 default_sa.go:55] duration metric: took 2.877732ms for default service account to be created ...
	I0404 22:59:49.389079   65047 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:49.394829   65047 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:49.394859   65047 system_pods.go:89] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.394867   65047 system_pods.go:89] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.394875   65047 system_pods.go:89] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.394882   65047 system_pods.go:89] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.394888   65047 system_pods.go:89] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.394896   65047 system_pods.go:89] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.394904   65047 system_pods.go:89] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.394912   65047 system_pods.go:89] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.394922   65047 system_pods.go:126] duration metric: took 5.837975ms to wait for k8s-apps to be running ...
	I0404 22:59:49.394931   65047 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:49.394980   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:49.414076   65047 system_svc.go:56] duration metric: took 19.132995ms WaitForService to wait for kubelet
	I0404 22:59:49.414111   65047 kubeadm.go:576] duration metric: took 4m22.928735837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:49.414133   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:49.417160   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:49.417189   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:49.417204   65047 node_conditions.go:105] duration metric: took 3.065597ms to run NodePressure ...
	I0404 22:59:49.417218   65047 start.go:240] waiting for startup goroutines ...
	I0404 22:59:49.417228   65047 start.go:245] waiting for cluster config update ...
	I0404 22:59:49.417240   65047 start.go:254] writing updated cluster config ...
	I0404 22:59:49.417615   65047 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:49.470214   65047 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0404 22:59:49.472584   65047 out.go:177] * Done! kubectl is now configured to use "no-preload-024416" cluster and "default" namespace by default
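With the profile started, the kubeconfig context now points at the new cluster. A quick sanity check (illustrative, not part of this run; minikube names the context after the profile) would be:

    kubectl config current-context        # expected: no-preload-024416
    kubectl get nodes -o wide             # confirms the apiserver is answering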
	I0404 22:59:47.361662   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:49.861684   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:52.360447   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:54.361574   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:56.361652   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:57.854723   64791 pod_ready.go:81] duration metric: took 4m0.000936307s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:57.854770   64791 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" (will not retry!)
	I0404 22:59:57.854788   64791 pod_ready.go:38] duration metric: took 4m7.483622498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:57.854822   64791 kubeadm.go:591] duration metric: took 4m16.210645162s to restartPrimaryControlPlane
	W0404 22:59:57.854889   64791 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:57.854920   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:00:11.689226   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:00:11.689589   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:11.689862   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:16.690194   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:16.690413   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:30.113988   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.259044217s)
	I0404 23:00:30.114079   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:30.130372   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 23:00:30.141114   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:00:30.151572   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:00:30.151606   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 23:00:30.151649   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 23:00:30.162006   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:00:30.162058   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:00:30.172386   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 23:00:30.182463   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:00:30.182526   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:00:30.192462   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.202932   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:00:30.203003   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.212623   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 23:00:30.222016   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:00:30.222079   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
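The stale-config cleanup above follows a simple pattern: keep each file under /etc/kubernetes only if it already references the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. A hedged shell sketch of that loop (endpoint and file list taken from the log above):

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done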
	I0404 23:00:30.231912   64791 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:00:30.292277   64791 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0404 23:00:30.292356   64791 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:00:30.453305   64791 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:00:30.453442   64791 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:00:30.453626   64791 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:00:30.680949   64791 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:00:26.690539   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:26.690756   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:30.683066   64791 out.go:204]   - Generating certificates and keys ...
	I0404 23:00:30.683154   64791 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:00:30.683236   64791 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:00:30.683345   64791 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:00:30.683428   64791 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:00:30.683546   64791 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:00:30.683635   64791 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:00:30.683733   64791 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:00:30.683815   64791 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:00:30.684097   64791 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:00:30.684545   64791 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:00:30.684950   64791 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:00:30.685055   64791 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:00:30.969937   64791 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:00:31.196768   64791 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0404 23:00:31.562187   64791 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:00:32.005580   64791 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:00:32.066695   64791 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:00:32.067434   64791 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:00:32.070078   64791 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:00:32.072059   64791 out.go:204]   - Booting up control plane ...
	I0404 23:00:32.072188   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:00:32.072299   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:00:32.072803   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:00:32.094384   64791 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:00:32.095424   64791 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:00:32.095565   64791 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:00:32.235639   64791 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:00:37.740974   64791 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.503079 seconds
	I0404 23:00:37.757121   64791 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0404 23:00:37.771037   64791 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0404 23:00:38.311403   64791 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0404 23:00:38.311608   64791 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-952083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0404 23:00:38.826726   64791 kubeadm.go:309] [bootstrap-token] Using token: fa5m5r.x1an64r8vrgp89m4
	I0404 23:00:38.828302   64791 out.go:204]   - Configuring RBAC rules ...
	I0404 23:00:38.828445   64791 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0404 23:00:38.835805   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0404 23:00:38.846727   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0404 23:00:38.850494   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0404 23:00:38.854653   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0404 23:00:38.859330   64791 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0404 23:00:38.882227   64791 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0404 23:00:39.129283   64791 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0404 23:00:39.263855   64791 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0404 23:00:39.265385   64791 kubeadm.go:309] 
	I0404 23:00:39.265534   64791 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0404 23:00:39.265556   64791 kubeadm.go:309] 
	I0404 23:00:39.265675   64791 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0404 23:00:39.265697   64791 kubeadm.go:309] 
	I0404 23:00:39.265728   64791 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0404 23:00:39.265816   64791 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0404 23:00:39.265873   64791 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0404 23:00:39.265883   64791 kubeadm.go:309] 
	I0404 23:00:39.265948   64791 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0404 23:00:39.265957   64791 kubeadm.go:309] 
	I0404 23:00:39.266009   64791 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0404 23:00:39.266018   64791 kubeadm.go:309] 
	I0404 23:00:39.266072   64791 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0404 23:00:39.266189   64791 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0404 23:00:39.266303   64791 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0404 23:00:39.266313   64791 kubeadm.go:309] 
	I0404 23:00:39.266445   64791 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0404 23:00:39.266542   64791 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0404 23:00:39.266555   64791 kubeadm.go:309] 
	I0404 23:00:39.266661   64791 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.266828   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c \
	I0404 23:00:39.266878   64791 kubeadm.go:309] 	--control-plane 
	I0404 23:00:39.266887   64791 kubeadm.go:309] 
	I0404 23:00:39.267081   64791 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0404 23:00:39.267099   64791 kubeadm.go:309] 
	I0404 23:00:39.267206   64791 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.267353   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c 
	I0404 23:00:39.278406   64791 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
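The [WARNING Service-Kubelet] line is kubeadm's standard hint that the kubelet systemd unit is not enabled for boot. It appears to be informational in this run (minikube later starts the kubelet explicitly, see the `systemctl start kubelet` call below); on a plain host the fix kubeadm suggests would be:

    sudo systemctl enable --now kubelet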
	I0404 23:00:39.278440   64791 cni.go:84] Creating CNI manager for ""
	I0404 23:00:39.278450   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 23:00:39.280130   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 23:00:39.281491   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 23:00:39.300708   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 23:00:39.411609   64791 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 23:00:39.411700   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:39.411741   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-952083 minikube.k8s.io/updated_at=2024_04_04T23_00_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=default-k8s-diff-port-952083 minikube.k8s.io/primary=true
	I0404 23:00:39.491213   64791 ops.go:34] apiserver oom_adj: -16
	I0404 23:00:39.639999   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.140239   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.640887   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.140139   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.641048   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.140111   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.640439   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.140701   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.640565   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.140884   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.640405   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.140665   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.640037   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.140251   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.640442   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.691433   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:46.691736   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:47.140207   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:47.640279   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.141046   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.640259   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.140525   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.640869   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.140946   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.640215   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.140706   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.640589   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.140926   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.640774   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.755392   64791 kubeadm.go:1107] duration metric: took 13.343764127s to wait for elevateKubeSystemPrivileges
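The repeated `kubectl get sa default` calls above are a poll: minikube waits for the "default" ServiceAccount to appear, a sign that the controller-manager's service-account controller is running, and the duration line tags this wait as elevateKubeSystemPrivileges. A hedged equivalent of the polling loop, using the same binary and kubeconfig as the log:

    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done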
	W0404 23:00:52.755430   64791 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0404 23:00:52.755438   64791 kubeadm.go:393] duration metric: took 5m11.165500768s to StartCluster
	I0404 23:00:52.755452   64791 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.755542   64791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 23:00:52.757921   64791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.758240   64791 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 23:00:52.760258   64791 out.go:177] * Verifying Kubernetes components...
	I0404 23:00:52.758360   64791 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 23:00:52.758468   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 23:00:52.761972   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 23:00:52.761988   64791 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.761992   64791 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762023   64791 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-952083"
	I0404 23:00:52.762033   64791 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762045   64791 addons.go:243] addon metrics-server should already be in state true
	I0404 23:00:52.761969   64791 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762082   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762089   64791 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762098   64791 addons.go:243] addon storage-provisioner should already be in state true
	I0404 23:00:52.762120   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762369   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762402   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762483   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762595   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.780081   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33543
	I0404 23:00:52.780630   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.782784   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0404 23:00:52.782816   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0404 23:00:52.783239   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783487   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783765   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783788   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.783934   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783952   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784138   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784378   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784386   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.784518   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.784536   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784883   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.785321   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785347   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.785391   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785423   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.788871   64791 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.788894   64791 addons.go:243] addon default-storageclass should already be in state true
	I0404 23:00:52.788931   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.789296   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.789333   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.804398   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33107
	I0404 23:00:52.804904   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.805654   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.805676   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.806123   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.806391   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.806825   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32861
	I0404 23:00:52.807228   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.807701   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.807729   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.808198   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.808222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.808414   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.810322   64791 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 23:00:52.809224   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0404 23:00:52.809886   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.811775   64791 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:52.811788   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 23:00:52.811806   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.813354   64791 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 23:00:52.812191   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.814489   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.814754   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 23:00:52.814765   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 23:00:52.814784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.815034   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.815192   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.815469   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.815641   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.815811   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.815997   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.816011   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.816227   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.816944   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.817981   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.818023   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.818260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818295   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.818316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818487   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.818752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.819016   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.819223   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.835713   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0404 23:00:52.836456   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.837140   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.837168   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.837987   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.838475   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.840280   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.840539   64791 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:52.840558   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 23:00:52.840576   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.843852   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.844244   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844418   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.844593   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.844757   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.844899   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.960152   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 23:00:52.982987   64791 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992420   64791 node_ready.go:49] node "default-k8s-diff-port-952083" has status "Ready":"True"
	I0404 23:00:52.992460   64791 node_ready.go:38] duration metric: took 9.431627ms for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992472   64791 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 23:00:52.998485   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:53.098746   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 23:00:53.098775   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 23:00:53.122387   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 23:00:53.122418   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 23:00:53.123566   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:53.143424   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:53.165026   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 23:00:53.165052   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 23:00:53.225963   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 23:00:53.720949   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720969   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720980   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.720988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721399   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721407   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721413   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721415   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721423   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721425   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721763   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721768   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721774   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721781   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.754309   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.754337   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.754642   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.754661   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.174695   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.174726   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175070   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175111   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175133   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175146   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.175154   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175443   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175446   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175454   64791 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-952083"
	I0404 23:00:54.177650   64791 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0404 23:00:54.179204   64791 addons.go:505] duration metric: took 1.420843749s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
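After the addon manifests are applied, whether metrics-server actually comes up can be checked from outside the test harness. A hedged example, assuming the addon keeps the upstream k8s-app=metrics-server label (in this run the pod remains Pending, which is the failure these tests report):

    kubectl -n kube-system get deploy,pod -l k8s-app=metrics-server
    kubectl get apiservices | grep metrics.k8s.io
    kubectl top nodes    # only succeeds once metrics-server is serving metrics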
	I0404 23:00:55.012235   64791 pod_ready.go:102] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"False"
	I0404 23:00:56.006950   64791 pod_ready.go:92] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.006974   64791 pod_ready.go:81] duration metric: took 3.00846182s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.006983   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.012991   64791 pod_ready.go:92] pod "coredns-76f75df574-vnzlh" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.013015   64791 pod_ready.go:81] duration metric: took 6.025362ms for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.013028   64791 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.017976   64791 pod_ready.go:92] pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.017999   64791 pod_ready.go:81] duration metric: took 4.963826ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.018008   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022797   64791 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.022816   64791 pod_ready.go:81] duration metric: took 4.802173ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022825   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.027987   64791 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.028011   64791 pod_ready.go:81] duration metric: took 5.178244ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.028024   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402956   64791 pod_ready.go:92] pod "kube-proxy-lbw9b" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.402978   64791 pod_ready.go:81] duration metric: took 374.945741ms for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402998   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806688   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.806723   64791 pod_ready.go:81] duration metric: took 403.715948ms for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806734   64791 pod_ready.go:38] duration metric: took 3.814250651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
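The node and pod readiness gates minikube just walked through map roughly onto `kubectl wait`; an illustrative equivalent (context name taken from the profile, selector and timeout mirroring the 6m0s wait above):

    kubectl --context default-k8s-diff-port-952083 wait --for=condition=Ready \
        node/default-k8s-diff-port-952083 --timeout=6m
    kubectl --context default-k8s-diff-port-952083 -n kube-system wait --for=condition=Ready \
        pod -l k8s-app=kube-dns --timeout=6m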
	I0404 23:00:56.806748   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 23:00:56.806804   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 23:00:56.823558   64791 api_server.go:72] duration metric: took 4.065275824s to wait for apiserver process to appear ...
	I0404 23:00:56.823582   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 23:00:56.823602   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 23:00:56.828204   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 23:00:56.829616   64791 api_server.go:141] control plane version: v1.29.3
	I0404 23:00:56.829647   64791 api_server.go:131] duration metric: took 6.059596ms to wait for apiserver health ...
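The healthz probe above can be reproduced with curl from any machine that can reach the node; -k skips verification of the cluster-CA-signed certificate, and with the default anonymous-auth rules the endpoint usually answers without credentials:

    curl -k https://192.168.72.148:8444/healthz
    # a healthy apiserver responds 200 with the body "ok"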
	I0404 23:00:56.829654   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 23:00:57.009301   64791 system_pods.go:59] 9 kube-system pods found
	I0404 23:00:57.009336   64791 system_pods.go:61] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.009343   64791 system_pods.go:61] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.009349   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.009354   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.009359   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.009364   64791 system_pods.go:61] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.009368   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.009377   64791 system_pods.go:61] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.009382   64791 system_pods.go:61] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.009394   64791 system_pods.go:74] duration metric: took 179.733932ms to wait for pod list to return data ...
	I0404 23:00:57.009404   64791 default_sa.go:34] waiting for default service account to be created ...
	I0404 23:00:57.203053   64791 default_sa.go:45] found service account: "default"
	I0404 23:00:57.203080   64791 default_sa.go:55] duration metric: took 193.668691ms for default service account to be created ...
	I0404 23:00:57.203092   64791 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 23:00:57.407952   64791 system_pods.go:86] 9 kube-system pods found
	I0404 23:00:57.407986   64791 system_pods.go:89] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.407992   64791 system_pods.go:89] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.407996   64791 system_pods.go:89] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.408001   64791 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.408005   64791 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.408009   64791 system_pods.go:89] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.408013   64791 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.408021   64791 system_pods.go:89] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.408025   64791 system_pods.go:89] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.408033   64791 system_pods.go:126] duration metric: took 204.93565ms to wait for k8s-apps to be running ...
	I0404 23:00:57.408044   64791 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 23:00:57.408086   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:57.426022   64791 system_svc.go:56] duration metric: took 17.970809ms WaitForService to wait for kubelet
	I0404 23:00:57.426056   64791 kubeadm.go:576] duration metric: took 4.66777886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 23:00:57.426088   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 23:00:57.603468   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 23:00:57.603495   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 23:00:57.603508   64791 node_conditions.go:105] duration metric: took 177.414876ms to run NodePressure ...
	I0404 23:00:57.603522   64791 start.go:240] waiting for startup goroutines ...
	I0404 23:00:57.603532   64791 start.go:245] waiting for cluster config update ...
	I0404 23:00:57.603544   64791 start.go:254] writing updated cluster config ...
	I0404 23:00:57.603820   64791 ssh_runner.go:195] Run: rm -f paused
	I0404 23:00:57.652962   64791 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 23:00:57.655346   64791 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-952083" cluster and "default" namespace by default
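	(Editor's note: the "Done!" line above only means the kubeconfig context was switched to the new profile. A quick sanity check of the freshly started cluster might look like the following; this is a hypothetical follow-up, not part of the captured run.)

		kubectl config current-context                                      # expected to print: default-k8s-diff-port-952083
		kubectl --context default-k8s-diff-port-952083 get pods -n kube-system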
	I0404 23:01:26.692667   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:01:26.693040   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:01:26.693065   65393 kubeadm.go:309] 
	I0404 23:01:26.693121   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:01:26.693176   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:01:26.693188   65393 kubeadm.go:309] 
	I0404 23:01:26.693230   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:01:26.693296   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:01:26.693448   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:01:26.693460   65393 kubeadm.go:309] 
	I0404 23:01:26.693615   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:01:26.693668   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:01:26.693715   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:01:26.693724   65393 kubeadm.go:309] 
	I0404 23:01:26.693859   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:01:26.693972   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:01:26.693981   65393 kubeadm.go:309] 
	I0404 23:01:26.694101   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:01:26.694189   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:01:26.694308   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:01:26.694419   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:01:26.694439   65393 kubeadm.go:309] 
	I0404 23:01:26.695392   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:01:26.695534   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:01:26.695645   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0404 23:01:26.695786   65393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0404 23:01:26.695852   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:01:27.175340   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:01:27.191436   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:01:27.202985   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:01:27.203023   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 23:01:27.203095   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 23:01:27.214266   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:01:27.214326   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:01:27.225823   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 23:01:27.236219   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:01:27.236291   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:01:27.247062   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.257772   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:01:27.257838   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.270595   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 23:01:27.283809   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:01:27.283884   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 23:01:27.294917   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:01:27.370268   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 23:01:27.370431   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:01:27.531171   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:01:27.531309   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:01:27.531502   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:01:27.741128   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:01:27.743555   65393 out.go:204]   - Generating certificates and keys ...
	I0404 23:01:27.743674   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:01:27.743778   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:01:27.743900   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:01:27.744020   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:01:27.744144   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:01:27.744231   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:01:27.744396   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:01:27.744532   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:01:27.744762   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:01:27.745172   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:01:27.745405   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:01:27.745902   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:01:27.811633   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:01:27.874609   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:01:28.009290   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:01:28.171654   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:01:28.193647   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:01:28.194533   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:01:28.194615   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:01:28.345640   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:01:28.348028   65393 out.go:204]   - Booting up control plane ...
	I0404 23:01:28.348233   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:01:28.352245   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:01:28.354730   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:01:28.354860   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:01:28.366484   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:02:08.368069   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:02:08.368190   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:08.368485   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:13.368934   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:13.369212   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:23.369698   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:23.369947   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:43.370821   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:43.371097   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372612   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:03:23.372925   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372952   65393 kubeadm.go:309] 
	I0404 23:03:23.373009   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:03:23.373060   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:03:23.373083   65393 kubeadm.go:309] 
	I0404 23:03:23.373139   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:03:23.373204   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:03:23.373444   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:03:23.373460   65393 kubeadm.go:309] 
	I0404 23:03:23.373609   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:03:23.373664   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:03:23.373709   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:03:23.373720   65393 kubeadm.go:309] 
	I0404 23:03:23.373881   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:03:23.373993   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:03:23.374004   65393 kubeadm.go:309] 
	I0404 23:03:23.374160   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:03:23.374293   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:03:23.374425   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:03:23.374542   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:03:23.374565   65393 kubeadm.go:309] 
	I0404 23:03:23.375946   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:03:23.376063   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:03:23.376228   65393 kubeadm.go:393] duration metric: took 7m58.828175379s to StartCluster
	I0404 23:03:23.376287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 23:03:23.376229   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0404 23:03:23.376350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 23:03:23.427597   65393 cri.go:89] found id: ""
	I0404 23:03:23.427625   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.427637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 23:03:23.427644   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 23:03:23.427709   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 23:03:23.466127   65393 cri.go:89] found id: ""
	I0404 23:03:23.466158   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.466168   65393 logs.go:278] No container was found matching "etcd"
	I0404 23:03:23.466176   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 23:03:23.466240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 23:03:23.509244   65393 cri.go:89] found id: ""
	I0404 23:03:23.509287   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.509295   65393 logs.go:278] No container was found matching "coredns"
	I0404 23:03:23.509301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 23:03:23.509347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 23:03:23.553691   65393 cri.go:89] found id: ""
	I0404 23:03:23.553722   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.553732   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 23:03:23.553740   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 23:03:23.553807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 23:03:23.601941   65393 cri.go:89] found id: ""
	I0404 23:03:23.601965   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.601973   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 23:03:23.601979   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 23:03:23.602037   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 23:03:23.639767   65393 cri.go:89] found id: ""
	I0404 23:03:23.639802   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.639811   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 23:03:23.639817   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 23:03:23.639875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 23:03:23.680136   65393 cri.go:89] found id: ""
	I0404 23:03:23.680168   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.680179   65393 logs.go:278] No container was found matching "kindnet"
	I0404 23:03:23.680187   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 23:03:23.680246   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 23:03:23.719745   65393 cri.go:89] found id: ""
	I0404 23:03:23.719767   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.719774   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 23:03:23.719784   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 23:03:23.719797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 23:03:23.776065   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 23:03:23.776105   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 23:03:23.791442   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 23:03:23.791469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 23:03:23.884793   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 23:03:23.884820   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 23:03:23.884836   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 23:03:24.001882   65393 logs.go:123] Gathering logs for container status ...
	I0404 23:03:24.001926   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0404 23:03:24.056020   65393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0404 23:03:24.056075   65393 out.go:239] * 
	W0404 23:03:24.056157   65393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.056191   65393 out.go:239] * 
	W0404 23:03:24.057119   65393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 23:03:24.060566   65393 out.go:177] 
	W0404 23:03:24.061904   65393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.061981   65393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0404 23:03:24.062009   65393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0404 23:03:24.063812   65393 out.go:177] 
	
	
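	(Editor's note: the suggestion above boils down to a handful of shell commands. A minimal sketch of following it by hand, assuming SSH access to the node via minikube; "<profile>" is a placeholder for the failing profile name, and the --extra-config flag is copied verbatim from the suggestion, not an additional recommendation.)

		minikube ssh -p <profile>        # open a shell on the node
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		exit
		# retry with the cgroup-driver override suggested by minikube
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd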
	==> CRI-O <==
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.959634639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c46228d-73f1-408b-86f4-78e84c8625fa name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.960014154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe0f596b810aff117130a571543bc585e1604fcfc7afc61e786d1c037dbeb02b,PodSandboxId:1e606d410069f89d9744d74ebfe69285cdfceeb2a14222c7007e6949542c5cba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654931889621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vnzlh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acab1107-bd9a-4767-bbcd-705faf9e4dea,},Annotations:map[string]string{io.kubernetes.container.hash: 3f538ba1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9948bf2c9f2cb8ecddf4ac62a62b91d29698209bcb7079973977921dd8ddb853,PodSandboxId:defd4ff15641e0443d4b54476a8c54f5600aabd71ecb1f335f08455a8a895fff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271654395577520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lbw9b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a,},Annotations:map[string]string{io.kubernetes.container.hash: f39b82e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7558f6eadded1c96ef90c16f63ed51c5e229bc4eaa7423972fc578c1f292ecf2,PodSandboxId:b74175d4d116a5b45f05955a31f1c0729f29fc8830814d7b2b4562a709d5b7a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271654337286800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0b001dd3-825c-43ed-903d-669afc75f79c,},Annotations:map[string]string{io.kubernetes.container.hash: f29f837b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667d376fb5c7f9a9c062d0ac724ba8abc3a136d98b147e30764e17014a43484e,PodSandboxId:31228057f26cc126eb6d25ed5cd3e6da1cd9adbcd454e6cd4c8948a754c85e02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654246297493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-t2l7m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc43d3e-d639-462b-81f1-d
4abcdcdbe91,},Annotations:map[string]string{io.kubernetes.container.hash: ee9bdf36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c86ec55c075385f1ac8907e649841db16fa9e1f2638b4db7be807ae150805e,PodSandboxId:7120bb2ac9655e8b6115db33ed82891274880ca4f3461ccfb52963261f06bf83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271633312950817
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9822b8713333441e2a7a7ef7e60a1807,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93a3fad2e101dedf64206032499c316421fe3dbb2346f9f0f67a9b16b5ad873,PodSandboxId:475c99bc935a4e8081c8011ca515c5a913621c13312a09cdb1b14267674ffc6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271633343028797,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9190b1dfcc94d08c85e02314ffdfe51,},Annotations:map[string]string{io.kubernetes.container.hash: df443e8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f3e9fe1de462bb096dd87f69962d3bec0e2eaf367bb2d48580049ddc9d4188,PodSandboxId:310f3182fd0f5fc11ebc0570506c935682e9649cc37bda709210bf86cacf3b76,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271633269622479,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afd80574e47ff311ef88779c9104c783,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9291b35e905cd430b588af3320216cda3f60bc245e92ddd9bee68dad11121c4d,PodSandboxId:5a065425e27e00ea7f3b076f2cdc70981843f8bd80597d3910e0d4aa1f8e7d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271633233027342,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b326420aa1703df382ccfa0814ed1c485db912b916d62fedac208e22833db9,PodSandboxId:590a7c82d2f705da124fa2fb6f39452cf6e0e5523b4c90ed43552f0b9c4c2f56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712271344632664693,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c46228d-73f1-408b-86f4-78e84c8625fa name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.961082561Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:fe0f596b810aff117130a571543bc585e1604fcfc7afc61e786d1c037dbeb02b,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5101e023-85f8-44c6-a376-ded8e256b7c8 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.961289795Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:fe0f596b810aff117130a571543bc585e1604fcfc7afc61e786d1c037dbeb02b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1712271654985799906,StartedAt:1712271655011592399,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vnzlh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acab1107-bd9a-4767-bbcd-705faf9e4dea,},Annotations:map[string]string{io.kubernetes.container.hash: 3f538ba1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/acab1107-bd9a-4767-bbcd-705faf9e4dea/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/acab1107-bd9a-4767-bbcd-705faf9e4dea/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/acab1107-bd9a-4767-bbcd-705faf9e4dea/containers/coredns/1bc72384,Readonly:false,SelinuxRelabel:false,
Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/acab1107-bd9a-4767-bbcd-705faf9e4dea/volumes/kubernetes.io~projected/kube-api-access-2864s,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-76f75df574-vnzlh_acab1107-bd9a-4767-bbcd-705faf9e4dea/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=5101e023-85f8-44c6-a376-ded8e256b7c8 name=/runtime.v1.RuntimeService/Container
Status
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.962064959Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9948bf2c9f2cb8ecddf4ac62a62b91d29698209bcb7079973977921dd8ddb853,Verbose:false,}" file="otel-collector/interceptors.go:62" id=61725c2c-bcdb-411e-b3b4-59b52587bfae name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.962530751Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9948bf2c9f2cb8ecddf4ac62a62b91d29698209bcb7079973977921dd8ddb853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1712271654542188361,StartedAt:1712271654635438384,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lbw9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a,},Annotations:map[string]string{io.kubernetes.container.hash: f39b82e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a/containers/kube-proxy/2a64b875,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,Host
Path:/var/lib/kubelet/pods/6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a/volumes/kubernetes.io~projected/kube-api-access-hbz4v,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-lbw9b_6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" fi
le="otel-collector/interceptors.go:74" id=61725c2c-bcdb-411e-b3b4-59b52587bfae name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.963373277Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7558f6eadded1c96ef90c16f63ed51c5e229bc4eaa7423972fc578c1f292ecf2,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1d732994-87ed-4d1e-9a1f-0bd4b76433ee name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.963599721Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7558f6eadded1c96ef90c16f63ed51c5e229bc4eaa7423972fc578c1f292ecf2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1712271654436666311,StartedAt:1712271654554481076,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b001dd3-825c-43ed-903d-669afc75f79c,},Annotations:map[string]string{io.kubernetes.container.hash: f29f837b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0b001dd3-825c-43ed-903d-669afc75f79c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0b001dd3-825c-43ed-903d-669afc75f79c/containers/storage-provisioner/3c6d3fdc,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/0b001dd3-825c-43ed-903d-669afc75f79c/volumes/kubernetes.io~projected/kube-api-access-5hm6w,Readonly:true,SelinuxR
elabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_0b001dd3-825c-43ed-903d-669afc75f79c/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1d732994-87ed-4d1e-9a1f-0bd4b76433ee name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.965027979Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:667d376fb5c7f9a9c062d0ac724ba8abc3a136d98b147e30764e17014a43484e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=32a05f20-c090-4a01-97bb-3585b0de174c name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.965283215Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:667d376fb5c7f9a9c062d0ac724ba8abc3a136d98b147e30764e17014a43484e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1712271654332267416,StartedAt:1712271654444874788,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-t2l7m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc43d3e-d639-462b-81f1-d4abcdcdbe91,},Annotations:map[string]string{io.kubernetes.container.hash: ee9bdf36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/dcc43d3e-d639-462b-81f1-d4abcdcdbe91/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/dcc43d3e-d639-462b-81f1-d4abcdcdbe91/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/dcc43d3e-d639-462b-81f1-d4abcdcdbe91/containers/coredns/8f1a01d8,Readonly:false,SelinuxRelabel:false,
Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/dcc43d3e-d639-462b-81f1-d4abcdcdbe91/volumes/kubernetes.io~projected/kube-api-access-cqpjn,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-76f75df574-t2l7m_dcc43d3e-d639-462b-81f1-d4abcdcdbe91/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=32a05f20-c090-4a01-97bb-3585b0de174c name=/runtime.v1.RuntimeService/Container
Status
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.965976788Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:75c86ec55c075385f1ac8907e649841db16fa9e1f2638b4db7be807ae150805e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=9ceeeb6b-4637-4c12-b879-d98691c44da5 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.966138222Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:75c86ec55c075385f1ac8907e649841db16fa9e1f2638b4db7be807ae150805e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1712271633487198508,StartedAt:1712271633608053374,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9822b8713333441e2a7a7ef7e60a1807,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/9822b8713333441e2a7a7ef7e60a1807/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/9822b8713333441e2a7a7ef7e60a1807/containers/kube-scheduler/2ac021f2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-952083_9822b8713333441e2a7a7ef7e60a1807/kube-scheduler/2.log,Resources:&ContainerResources{Lin
ux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=9ceeeb6b-4637-4c12-b879-d98691c44da5 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.966892882Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a93a3fad2e101dedf64206032499c316421fe3dbb2346f9f0f67a9b16b5ad873,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0d5f3839-9507-44ca-a731-7a4313437c69 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.967078648Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a93a3fad2e101dedf64206032499c316421fe3dbb2346f9f0f67a9b16b5ad873,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1712271633403607336,StartedAt:1712271633467562176,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9190b1dfcc94d08c85e02314ffdfe51,},Annotations:map[string]string{io.kubernetes.container.hash: df443e8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f9190b1dfcc94d08c85e02314ffdfe51/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f9190b1dfcc94d08c85e02314ffdfe51/containers/etcd/e13ea96f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/
pods/kube-system_etcd-default-k8s-diff-port-952083_f9190b1dfcc94d08c85e02314ffdfe51/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0d5f3839-9507-44ca-a731-7a4313437c69 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.968144074Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:66f3e9fe1de462bb096dd87f69962d3bec0e2eaf367bb2d48580049ddc9d4188,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f14ca141-9a74-46f6-b438-e45f3391900f name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.968619264Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:66f3e9fe1de462bb096dd87f69962d3bec0e2eaf367bb2d48580049ddc9d4188,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1712271633378356839,StartedAt:1712271633504326721,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afd80574e47ff311ef88779c9104c783,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/afd80574e47ff311ef88779c9104c783/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/afd80574e47ff311ef88779c9104c783/containers/kube-controller-manager/7dbe31c0,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagati
on:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-default-k8s-diff-port-952083_afd80574e47ff311ef88779c9104c783/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,Oom
ScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f14ca141-9a74-46f6-b438-e45f3391900f name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.969669667Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9291b35e905cd430b588af3320216cda3f60bc245e92ddd9bee68dad11121c4d,Verbose:false,}" file="otel-collector/interceptors.go:62" id=85895f13-67a0-481f-a46b-0702d6f4fea0 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.969996876Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9291b35e905cd430b588af3320216cda3f60bc245e92ddd9bee68dad11121c4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1712271633306322818,StartedAt:1712271633396292830,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/51c4dd72e0a1404b78b3fc33934e70a2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/51c4dd72e0a1404b78b3fc33934e70a2/containers/kube-apiserver/ba5faa60,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapp
ing{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-952083_51c4dd72e0a1404b78b3fc33934e70a2/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=85895f13-67a0-481f-a46b-0702d6f4fea0 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.973863965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=027c2b4d-cd28-4199-88b2-1806ce89fec7 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.974310602Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=027c2b4d-cd28-4199-88b2-1806ce89fec7 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.976454084Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8175f389-2517-4d94-a96f-99762cccbec5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.977694983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272199977675740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8175f389-2517-4d94-a96f-99762cccbec5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.979327823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4adf3cee-6390-43b7-9b6a-e868473350fa name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.979462714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4adf3cee-6390-43b7-9b6a-e868473350fa name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:09:59 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:09:59.980903776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe0f596b810aff117130a571543bc585e1604fcfc7afc61e786d1c037dbeb02b,PodSandboxId:1e606d410069f89d9744d74ebfe69285cdfceeb2a14222c7007e6949542c5cba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654931889621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vnzlh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acab1107-bd9a-4767-bbcd-705faf9e4dea,},Annotations:map[string]string{io.kubernetes.container.hash: 3f538ba1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9948bf2c9f2cb8ecddf4ac62a62b91d29698209bcb7079973977921dd8ddb853,PodSandboxId:defd4ff15641e0443d4b54476a8c54f5600aabd71ecb1f335f08455a8a895fff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271654395577520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lbw9b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a,},Annotations:map[string]string{io.kubernetes.container.hash: f39b82e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7558f6eadded1c96ef90c16f63ed51c5e229bc4eaa7423972fc578c1f292ecf2,PodSandboxId:b74175d4d116a5b45f05955a31f1c0729f29fc8830814d7b2b4562a709d5b7a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271654337286800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0b001dd3-825c-43ed-903d-669afc75f79c,},Annotations:map[string]string{io.kubernetes.container.hash: f29f837b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667d376fb5c7f9a9c062d0ac724ba8abc3a136d98b147e30764e17014a43484e,PodSandboxId:31228057f26cc126eb6d25ed5cd3e6da1cd9adbcd454e6cd4c8948a754c85e02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654246297493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-t2l7m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc43d3e-d639-462b-81f1-d
4abcdcdbe91,},Annotations:map[string]string{io.kubernetes.container.hash: ee9bdf36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c86ec55c075385f1ac8907e649841db16fa9e1f2638b4db7be807ae150805e,PodSandboxId:7120bb2ac9655e8b6115db33ed82891274880ca4f3461ccfb52963261f06bf83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271633312950817
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9822b8713333441e2a7a7ef7e60a1807,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93a3fad2e101dedf64206032499c316421fe3dbb2346f9f0f67a9b16b5ad873,PodSandboxId:475c99bc935a4e8081c8011ca515c5a913621c13312a09cdb1b14267674ffc6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271633343028797,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9190b1dfcc94d08c85e02314ffdfe51,},Annotations:map[string]string{io.kubernetes.container.hash: df443e8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f3e9fe1de462bb096dd87f69962d3bec0e2eaf367bb2d48580049ddc9d4188,PodSandboxId:310f3182fd0f5fc11ebc0570506c935682e9649cc37bda709210bf86cacf3b76,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271633269622479,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afd80574e47ff311ef88779c9104c783,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9291b35e905cd430b588af3320216cda3f60bc245e92ddd9bee68dad11121c4d,PodSandboxId:5a065425e27e00ea7f3b076f2cdc70981843f8bd80597d3910e0d4aa1f8e7d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271633233027342,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b326420aa1703df382ccfa0814ed1c485db912b916d62fedac208e22833db9,PodSandboxId:590a7c82d2f705da124fa2fb6f39452cf6e0e5523b4c90ed43552f0b9c4c2f56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712271344632664693,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4adf3cee-6390-43b7-9b6a-e868473350fa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fe0f596b810af       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   1e606d410069f       coredns-76f75df574-vnzlh
	9948bf2c9f2cb       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   defd4ff15641e       kube-proxy-lbw9b
	7558f6eadded1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   b74175d4d116a       storage-provisioner
	667d376fb5c7f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   31228057f26cc       coredns-76f75df574-t2l7m
	a93a3fad2e101       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   475c99bc935a4       etcd-default-k8s-diff-port-952083
	75c86ec55c075       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   7120bb2ac9655       kube-scheduler-default-k8s-diff-port-952083
	66f3e9fe1de46       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   310f3182fd0f5       kube-controller-manager-default-k8s-diff-port-952083
	9291b35e905cd       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   5a065425e27e0       kube-apiserver-default-k8s-diff-port-952083
	c1b326420aa17       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   14 minutes ago      Exited              kube-apiserver            1                   590a7c82d2f70       kube-apiserver-default-k8s-diff-port-952083
	
	
	==> coredns [667d376fb5c7f9a9c062d0ac724ba8abc3a136d98b147e30764e17014a43484e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [fe0f596b810aff117130a571543bc585e1604fcfc7afc61e786d1c037dbeb02b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-952083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-952083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=default-k8s-diff-port-952083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T23_00_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 23:00:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-952083
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 23:09:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 23:06:05 +0000   Thu, 04 Apr 2024 23:00:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 23:06:05 +0000   Thu, 04 Apr 2024 23:00:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 23:06:05 +0000   Thu, 04 Apr 2024 23:00:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 23:06:05 +0000   Thu, 04 Apr 2024 23:00:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.148
	  Hostname:    default-k8s-diff-port-952083
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3649f3e14ef44d7e8df583f4502764e9
	  System UUID:                3649f3e1-4ef4-4d7e-8df5-83f4502764e9
	  Boot ID:                    9732efbd-d50a-4d8b-b568-3a2b2b2b3406
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-t2l7m                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-76f75df574-vnzlh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-default-k8s-diff-port-952083                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-952083             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-952083    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-lbw9b                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-default-k8s-diff-port-952083             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-szq42                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x7 over 9m28s)  kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node default-k8s-diff-port-952083 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m11s                  kubelet          Node default-k8s-diff-port-952083 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node default-k8s-diff-port-952083 event: Registered Node default-k8s-diff-port-952083 in Controller
	
	
	==> dmesg <==
	[  +0.059984] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042620] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.073712] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.129299] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.710553] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.102621] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.061699] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074102] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.191005] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.148878] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.325050] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.713532] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.064544] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.679661] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +5.597929] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.436594] kauditd_printk_skb: 79 callbacks suppressed
	[Apr 4 23:00] kauditd_printk_skb: 7 callbacks suppressed
	[  +2.044940] systemd-fstab-generator[3619]: Ignoring "noauto" option for root device
	[  +4.511666] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.289982] systemd-fstab-generator[3945]: Ignoring "noauto" option for root device
	[ +13.894364] systemd-fstab-generator[4149]: Ignoring "noauto" option for root device
	[  +0.114354] kauditd_printk_skb: 14 callbacks suppressed
	[Apr 4 23:01] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [a93a3fad2e101dedf64206032499c316421fe3dbb2346f9f0f67a9b16b5ad873] <==
	{"level":"info","ts":"2024-04-04T23:00:33.637828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ddd8c93e0466f1bf switched to configuration voters=(15985748145550586303)"}
	{"level":"info","ts":"2024-04-04T23:00:33.640969Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8362fc97c8dc7c","local-member-id":"ddd8c93e0466f1bf","added-peer-id":"ddd8c93e0466f1bf","added-peer-peer-urls":["https://192.168.72.148:2380"]}
	{"level":"info","ts":"2024-04-04T23:00:33.676882Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-04T23:00:33.677144Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ddd8c93e0466f1bf","initial-advertise-peer-urls":["https://192.168.72.148:2380"],"listen-peer-urls":["https://192.168.72.148:2380"],"advertise-client-urls":["https://192.168.72.148:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.148:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-04T23:00:33.677309Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-04T23:00:33.67986Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.148:2380"}
	{"level":"info","ts":"2024-04-04T23:00:33.679952Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.148:2380"}
	{"level":"info","ts":"2024-04-04T23:00:33.879848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ddd8c93e0466f1bf is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-04T23:00:33.879978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ddd8c93e0466f1bf became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-04T23:00:33.880042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ddd8c93e0466f1bf received MsgPreVoteResp from ddd8c93e0466f1bf at term 1"}
	{"level":"info","ts":"2024-04-04T23:00:33.880088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ddd8c93e0466f1bf became candidate at term 2"}
	{"level":"info","ts":"2024-04-04T23:00:33.880124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ddd8c93e0466f1bf received MsgVoteResp from ddd8c93e0466f1bf at term 2"}
	{"level":"info","ts":"2024-04-04T23:00:33.880192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ddd8c93e0466f1bf became leader at term 2"}
	{"level":"info","ts":"2024-04-04T23:00:33.880232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ddd8c93e0466f1bf elected leader ddd8c93e0466f1bf at term 2"}
	{"level":"info","ts":"2024-04-04T23:00:33.885041Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ddd8c93e0466f1bf","local-member-attributes":"{Name:default-k8s-diff-port-952083 ClientURLs:[https://192.168.72.148:2379]}","request-path":"/0/members/ddd8c93e0466f1bf/attributes","cluster-id":"8362fc97c8dc7c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-04T23:00:33.88567Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T23:00:33.885764Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T23:00:33.885817Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T23:00:33.894011Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8362fc97c8dc7c","local-member-id":"ddd8c93e0466f1bf","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T23:00:33.886053Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-04T23:00:33.889499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.148:2379"}
	{"level":"info","ts":"2024-04-04T23:00:33.896947Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T23:00:33.896995Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T23:00:33.898547Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-04T23:00:33.898933Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:10:00 up 14 min,  0 users,  load average: 0.07, 0.26, 0.24
	Linux default-k8s-diff-port-952083 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9291b35e905cd430b588af3320216cda3f60bc245e92ddd9bee68dad11121c4d] <==
	I0404 23:03:54.747812       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:05:35.905319       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:05:35.905689       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0404 23:05:36.905955       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:05:36.906067       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:05:36.906080       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:05:36.905974       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:05:36.906151       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:05:36.907280       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:06:36.906578       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:06:36.906772       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:06:36.906787       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:06:36.907646       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:06:36.907691       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:06:36.907844       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:08:36.907839       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:08:36.908040       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:08:36.908060       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:08:36.908217       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:08:36.908353       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:08:36.910133       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [c1b326420aa1703df382ccfa0814ed1c485db912b916d62fedac208e22833db9] <==
	W0404 23:00:24.995866       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.001605       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.020658       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.025492       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.027021       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.067356       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.148712       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.164299       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.197897       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.206022       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.234151       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.235546       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.251529       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.299194       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.313983       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.528111       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.566009       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.587433       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:26.206546       1 logging.go:59] [core] [Channel #196 SubChannel #197] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:28.656824       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:29.334604       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:29.369238       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:29.585116       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:29.613043       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:29.646115       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [66f3e9fe1de462bb096dd87f69962d3bec0e2eaf367bb2d48580049ddc9d4188] <==
	I0404 23:04:22.111702       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:04:51.651794       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:04:52.125151       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:05:21.658816       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:05:22.135216       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:05:51.666283       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:05:52.145965       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:06:21.673517       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:06:22.154681       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0404 23:06:45.324188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="514.308µs"
	E0404 23:06:51.681145       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:06:52.164640       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0404 23:06:59.317558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="129.548µs"
	E0404 23:07:21.686189       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:07:22.173249       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:07:51.691581       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:07:52.186422       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:08:21.698255       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:08:22.194588       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:08:51.705000       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:08:52.205418       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:09:21.709666       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:09:22.213970       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:09:51.716495       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:09:52.223942       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9948bf2c9f2cb8ecddf4ac62a62b91d29698209bcb7079973977921dd8ddb853] <==
	I0404 23:00:54.921444       1 server_others.go:72] "Using iptables proxy"
	I0404 23:00:54.964361       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.148"]
	I0404 23:00:55.080461       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 23:00:55.080610       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 23:00:55.080702       1 server_others.go:168] "Using iptables Proxier"
	I0404 23:00:55.084135       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 23:00:55.085113       1 server.go:865] "Version info" version="v1.29.3"
	I0404 23:00:55.085236       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 23:00:55.088468       1 config.go:188] "Starting service config controller"
	I0404 23:00:55.088572       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 23:00:55.088895       1 config.go:97] "Starting endpoint slice config controller"
	I0404 23:00:55.088967       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 23:00:55.089962       1 config.go:315] "Starting node config controller"
	I0404 23:00:55.091214       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 23:00:55.092161       1 shared_informer.go:318] Caches are synced for node config
	I0404 23:00:55.189221       1 shared_informer.go:318] Caches are synced for service config
	I0404 23:00:55.190475       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [75c86ec55c075385f1ac8907e649841db16fa9e1f2638b4db7be807ae150805e] <==
	W0404 23:00:35.958619       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0404 23:00:35.958648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0404 23:00:36.762958       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0404 23:00:36.763027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0404 23:00:36.817605       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0404 23:00:36.817663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0404 23:00:36.973441       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0404 23:00:36.973488       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0404 23:00:36.991267       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0404 23:00:36.991321       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0404 23:00:36.992491       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0404 23:00:36.992538       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0404 23:00:37.017206       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0404 23:00:37.017295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0404 23:00:37.036058       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0404 23:00:37.036149       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0404 23:00:37.065471       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0404 23:00:37.065523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0404 23:00:37.112123       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0404 23:00:37.112282       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0404 23:00:37.196659       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0404 23:00:37.196706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0404 23:00:37.432079       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0404 23:00:37.432133       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0404 23:00:40.241454       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 04 23:07:39 default-k8s-diff-port-952083 kubelet[3952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:07:39 default-k8s-diff-port-952083 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:07:39 default-k8s-diff-port-952083 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:07:39 default-k8s-diff-port-952083 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:07:48 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:07:48.299113    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:07:59 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:07:59.302179    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:08:10 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:08:10.298566    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:08:24 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:08:24.298512    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:08:35 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:08:35.302331    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:08:39 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:08:39.365200    3952 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 23:08:39 default-k8s-diff-port-952083 kubelet[3952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:08:39 default-k8s-diff-port-952083 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:08:39 default-k8s-diff-port-952083 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:08:39 default-k8s-diff-port-952083 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:08:48 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:08:48.298130    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:08:59 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:08:59.298907    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:09:10 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:09:10.298918    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:09:24 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:09:24.298521    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:09:35 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:09:35.298958    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:09:39 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:09:39.365635    3952 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 23:09:39 default-k8s-diff-port-952083 kubelet[3952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:09:39 default-k8s-diff-port-952083 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:09:39 default-k8s-diff-port-952083 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:09:39 default-k8s-diff-port-952083 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:09:50 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:09:50.299609    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	
	
	==> storage-provisioner [7558f6eadded1c96ef90c16f63ed51c5e229bc4eaa7423972fc578c1f292ecf2] <==
	I0404 23:00:54.630574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0404 23:00:54.652942       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0404 23:00:54.653256       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0404 23:00:54.683495       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0404 23:00:54.685178       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-952083_0ac4accc-ee3e-4fa4-aa9e-841c3ce0eb6f!
	I0404 23:00:54.688291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfa93024-1c7d-427e-8f35-daa7a4fc8fec", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-952083_0ac4accc-ee3e-4fa4-aa9e-841c3ce0eb6f became leader
	I0404 23:00:54.785662       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-952083_0ac4accc-ee3e-4fa4-aa9e-841c3ce0eb6f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-952083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-szq42
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-952083 describe pod metrics-server-57f55c9bc5-szq42
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-952083 describe pod metrics-server-57f55c9bc5-szq42: exit status 1 (67.089867ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-szq42" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-952083 describe pod metrics-server-57f55c9bc5-szq42: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.52s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
E0404 23:03:31.296000   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
E0404 23:03:48.670027   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
E0404 23:03:50.480634   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
E0404 23:04:13.465208   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
E0404 23:04:36.908414   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
(the same pod-list WARNING repeated another 35 times)
E0404 23:05:11.717687   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
(the same pod-list WARNING repeated another 11 times)
E0404 23:05:23.599305   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
(the same pod-list WARNING repeated another 13 times)
E0404 23:05:36.510633   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
(the same pod-list WARNING repeated another 24 times)
E0404 23:05:59.954589   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
(the same pod-list WARNING repeated another 24 times)
E0404 23:06:24.216506   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
(the same pod-list WARNING repeated another 6 times)
E0404 23:06:30.506689   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
(the same pod-list WARNING repeated another 16 times)
E0404 23:06:46.644940   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
(the same pod-list WARNING repeated another 7 times)
E0404 23:06:53.535375   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
(the same pod-list WARNING repeated another 15 times)
E0404 23:07:08.250839   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
(previous WARNING line repeated 60 more times)
E0404 23:08:09.142514   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
(previous WARNING line repeated 38 more times)
E0404 23:08:48.670053   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
(previous WARNING line repeated 24 more times)
E0404 23:09:13.464633   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
(previous WARNING line repeated 23 more times)
E0404 23:09:36.908438   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
(previous WARNING line repeated 23 more times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
E0404 23:10:23.599044   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
	[previous line repeated 48 more times]
E0404 23:11:12.189494   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
	[previous line repeated 11 more times]
E0404 23:11:24.217161   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
	[previous line repeated 5 more times]
E0404 23:11:30.505842   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
	[previous line repeated 37 more times]
E0404 23:12:08.250842   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
	[previous line repeated 18 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-343162 -n old-k8s-version-343162
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 2 (256.171032ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-343162" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 2 (242.501602ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-343162 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-343162 logs -n 25: (1.586001344s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-063570 sudo cat                              | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo find                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo crio                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-063570                                       | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-443615 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | disable-driver-mounts-443615                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:46 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-952083  | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-143118            | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-024416             | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-343162        | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-952083       | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 23:00 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-143118                 | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-024416                  | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-343162             | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 22:50:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 22:50:56.470398   65393 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:50:56.470518   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470527   65393 out.go:304] Setting ErrFile to fd 2...
	I0404 22:50:56.470531   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470702   65393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:50:56.471207   65393 out.go:298] Setting JSON to false
	I0404 22:50:56.472107   65393 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5602,"bootTime":1712265455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:50:56.472206   65393 start.go:139] virtualization: kvm guest
	I0404 22:50:56.474636   65393 out.go:177] * [old-k8s-version-343162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:50:56.477086   65393 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:50:56.477116   65393 notify.go:220] Checking for updates...
	I0404 22:50:56.479875   65393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:50:56.481381   65393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:50:56.482598   65393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:50:56.483896   65393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:50:56.485276   65393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:50:56.487030   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:50:56.487414   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.487471   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.502537   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44449
	I0404 22:50:56.502977   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.503568   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.503592   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.503915   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.504146   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.506219   65393 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0404 22:50:56.507513   65393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:50:56.507838   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.507872   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.522700   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0404 22:50:56.523202   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.523675   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.523702   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.524012   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.524213   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.560806   65393 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 22:50:56.562468   65393 start.go:297] selected driver: kvm2
	I0404 22:50:56.562485   65393 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.562593   65393 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:50:56.563382   65393 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.563463   65393 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:50:56.579804   65393 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:50:56.580216   65393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:50:56.580311   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:50:56.580325   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:50:56.580361   65393 start.go:340] cluster config:
	{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.580508   65393 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.582753   65393 out.go:177] * Starting "old-k8s-version-343162" primary control-plane node in "old-k8s-version-343162" cluster
	I0404 22:50:55.940353   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:50:56.584381   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:50:56.584440   65393 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0404 22:50:56.584451   65393 cache.go:56] Caching tarball of preloaded images
	I0404 22:50:56.584585   65393 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 22:50:56.584616   65393 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0404 22:50:56.584754   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:50:56.585041   65393 start.go:360] acquireMachinesLock for old-k8s-version-343162: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:50:59.012433   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:05.092380   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:08.164456   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:14.244461   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:17.316477   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:23.396389   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:26.468349   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:32.548445   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:35.620422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:41.700386   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:44.772391   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:50.852426   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:53.924460   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:00.004442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:03.076397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:09.156418   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:12.228432   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:18.308468   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:21.380441   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:27.460405   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:30.532470   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:36.612450   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:39.684473   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:45.764397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:48.836435   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:54.916369   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:57.992400   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:04.068476   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:07.140457   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:13.220492   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:16.292379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:22.372422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:25.444444   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:31.524421   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:34.596378   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:40.676452   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:43.748475   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:49.828481   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:52.900427   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:58.980363   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:02.052442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:08.132379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:11.204438   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:14.209023   64902 start.go:364] duration metric: took 4m27.708792236s to acquireMachinesLock for "embed-certs-143118"
	I0404 22:54:14.209073   64902 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:14.209081   64902 fix.go:54] fixHost starting: 
	I0404 22:54:14.209454   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:14.209493   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:14.224780   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0404 22:54:14.225202   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:14.225774   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:14.225796   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:14.226086   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:14.226268   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:14.226381   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:14.227982   64902 fix.go:112] recreateIfNeeded on embed-certs-143118: state=Stopped err=<nil>
	I0404 22:54:14.228030   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	W0404 22:54:14.228195   64902 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:14.229974   64902 out.go:177] * Restarting existing kvm2 VM for "embed-certs-143118" ...
	I0404 22:54:14.231465   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Start
	I0404 22:54:14.231647   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring networks are active...
	I0404 22:54:14.232454   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network default is active
	I0404 22:54:14.232844   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network mk-embed-certs-143118 is active
	I0404 22:54:14.233268   64902 main.go:141] libmachine: (embed-certs-143118) Getting domain xml...
	I0404 22:54:14.234059   64902 main.go:141] libmachine: (embed-certs-143118) Creating domain...
	I0404 22:54:15.443570   64902 main.go:141] libmachine: (embed-certs-143118) Waiting to get IP...
	I0404 22:54:15.444501   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.444881   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.444974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.444857   65866 retry.go:31] will retry after 235.384261ms: waiting for machine to come up
	I0404 22:54:15.682336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.682843   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.682889   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.682802   65866 retry.go:31] will retry after 298.217645ms: waiting for machine to come up
	I0404 22:54:15.982346   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.982793   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.982822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.982737   65866 retry.go:31] will retry after 388.227781ms: waiting for machine to come up
	I0404 22:54:16.372259   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.372758   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.372783   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.372709   65866 retry.go:31] will retry after 455.494549ms: waiting for machine to come up
	I0404 22:54:14.206346   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:14.206380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206696   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:54:14.206723   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206981   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:54:14.208882   64791 machine.go:97] duration metric: took 4m37.40831033s to provisionDockerMachine
	I0404 22:54:14.208934   64791 fix.go:56] duration metric: took 4m37.430672573s for fixHost
	I0404 22:54:14.208943   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 4m37.430710876s
	W0404 22:54:14.208977   64791 start.go:713] error starting host: provision: host is not running
	W0404 22:54:14.209074   64791 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0404 22:54:14.209085   64791 start.go:728] Will try again in 5 seconds ...
	I0404 22:54:16.829381   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.829863   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.829895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.829758   65866 retry.go:31] will retry after 476.920945ms: waiting for machine to come up
	I0404 22:54:17.308558   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:17.309233   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:17.309260   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:17.309155   65866 retry.go:31] will retry after 723.322819ms: waiting for machine to come up
	I0404 22:54:18.034138   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.034605   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.034633   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.034559   65866 retry.go:31] will retry after 858.492179ms: waiting for machine to come up
	I0404 22:54:18.894172   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.894590   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.894621   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.894541   65866 retry.go:31] will retry after 1.243998506s: waiting for machine to come up
	I0404 22:54:20.140387   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:20.140872   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:20.140906   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:20.140777   65866 retry.go:31] will retry after 1.245446322s: waiting for machine to come up
	I0404 22:54:21.388210   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:21.388626   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:21.388653   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:21.388588   65866 retry.go:31] will retry after 2.315520772s: waiting for machine to come up
	I0404 22:54:19.210605   64791 start.go:360] acquireMachinesLock for default-k8s-diff-port-952083: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:54:23.707374   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:23.707938   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:23.707971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:23.707895   65866 retry.go:31] will retry after 1.925131112s: waiting for machine to come up
	I0404 22:54:25.635778   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:25.636251   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:25.636281   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:25.636192   65866 retry.go:31] will retry after 3.393560306s: waiting for machine to come up
	I0404 22:54:29.033804   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:29.034284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:29.034311   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:29.034223   65866 retry.go:31] will retry after 4.387913748s: waiting for machine to come up
	I0404 22:54:34.941563   65047 start.go:364] duration metric: took 4m31.283417073s to acquireMachinesLock for "no-preload-024416"
	I0404 22:54:34.941648   65047 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:34.941658   65047 fix.go:54] fixHost starting: 
	I0404 22:54:34.942144   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:34.942187   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:34.959447   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0404 22:54:34.959938   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:34.960536   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:54:34.960563   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:34.960909   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:34.961137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:34.961305   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:54:34.963158   65047 fix.go:112] recreateIfNeeded on no-preload-024416: state=Stopped err=<nil>
	I0404 22:54:34.963183   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	W0404 22:54:34.963366   65047 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:34.965339   65047 out.go:177] * Restarting existing kvm2 VM for "no-preload-024416" ...
	I0404 22:54:33.427303   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427873   64902 main.go:141] libmachine: (embed-certs-143118) Found IP for machine: 192.168.61.137
	I0404 22:54:33.427903   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has current primary IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427916   64902 main.go:141] libmachine: (embed-certs-143118) Reserving static IP address...
	I0404 22:54:33.428384   64902 main.go:141] libmachine: (embed-certs-143118) Reserved static IP address: 192.168.61.137
	I0404 22:54:33.428436   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.428454   64902 main.go:141] libmachine: (embed-certs-143118) Waiting for SSH to be available...
	I0404 22:54:33.428483   64902 main.go:141] libmachine: (embed-certs-143118) DBG | skip adding static IP to network mk-embed-certs-143118 - found existing host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"}
	I0404 22:54:33.428496   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Getting to WaitForSSH function...
	I0404 22:54:33.430650   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.430971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.430999   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.431167   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH client type: external
	I0404 22:54:33.431187   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa (-rw-------)
	I0404 22:54:33.431213   64902 main.go:141] libmachine: (embed-certs-143118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:33.431231   64902 main.go:141] libmachine: (embed-certs-143118) DBG | About to run SSH command:
	I0404 22:54:33.431247   64902 main.go:141] libmachine: (embed-certs-143118) DBG | exit 0
	I0404 22:54:33.556780   64902 main.go:141] libmachine: (embed-certs-143118) DBG | SSH cmd err, output: <nil>: 
	I0404 22:54:33.557184   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetConfigRaw
	I0404 22:54:33.557821   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.560786   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561122   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.561156   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561482   64902 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/config.json ...
	I0404 22:54:33.561714   64902 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:33.561738   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:33.562027   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.564494   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564777   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.564800   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564996   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.565203   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565364   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565525   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.565660   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.565840   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.565851   64902 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:33.672777   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:33.672807   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673051   64902 buildroot.go:166] provisioning hostname "embed-certs-143118"
	I0404 22:54:33.673072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673221   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.675895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676276   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.676302   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676441   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.676631   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676783   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676920   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.677099   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.677291   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.677306   64902 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-143118 && echo "embed-certs-143118" | sudo tee /etc/hostname
	I0404 22:54:33.800210   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-143118
	
	I0404 22:54:33.800234   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.802902   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.803313   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803464   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.803699   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.803917   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.804130   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.804307   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.804477   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.804493   64902 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-143118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-143118/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-143118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:33.922754   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:33.922787   64902 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:33.922810   64902 buildroot.go:174] setting up certificates
	I0404 22:54:33.922821   64902 provision.go:84] configureAuth start
	I0404 22:54:33.922829   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.923168   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.926018   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926349   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.926376   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926536   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.928860   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929202   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.929225   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929405   64902 provision.go:143] copyHostCerts
	I0404 22:54:33.929464   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:33.929474   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:33.929538   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:33.929623   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:33.929631   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:33.929654   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:33.929705   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:33.929712   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:33.929733   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:33.929781   64902 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.embed-certs-143118 san=[127.0.0.1 192.168.61.137 embed-certs-143118 localhost minikube]
	I0404 22:54:34.248318   64902 provision.go:177] copyRemoteCerts
	I0404 22:54:34.248368   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:34.248392   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.251549   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.251969   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.252005   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.252162   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.252415   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.252592   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.252806   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.340287   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:54:34.367083   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:34.392473   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:34.418850   64902 provision.go:87] duration metric: took 496.019024ms to configureAuth
	I0404 22:54:34.418876   64902 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:34.419050   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:34.419118   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.422414   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422794   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.422822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422989   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.423196   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423395   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423548   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.423706   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.423903   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.423926   64902 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:34.698283   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:34.698315   64902 machine.go:97] duration metric: took 1.136579802s to provisionDockerMachine
	I0404 22:54:34.698330   64902 start.go:293] postStartSetup for "embed-certs-143118" (driver="kvm2")
	I0404 22:54:34.698343   64902 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:34.698362   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.698738   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:34.698767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.701491   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.701869   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.701899   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.702062   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.702269   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.702410   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.702580   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.787376   64902 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:34.791940   64902 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:34.791969   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:34.792032   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:34.792113   64902 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:34.792228   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:34.801800   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:34.828107   64902 start.go:296] duration metric: took 129.762377ms for postStartSetup
	I0404 22:54:34.828175   64902 fix.go:56] duration metric: took 20.61909326s for fixHost
	I0404 22:54:34.828200   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.831336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.831914   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.831939   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.832211   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.832443   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832725   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832911   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.833074   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.833242   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.833253   64902 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:54:34.941376   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271274.913625729
	
	I0404 22:54:34.941401   64902 fix.go:216] guest clock: 1712271274.913625729
	I0404 22:54:34.941409   64902 fix.go:229] Guest: 2024-04-04 22:54:34.913625729 +0000 UTC Remote: 2024-04-04 22:54:34.828180786 +0000 UTC m=+288.480480037 (delta=85.444943ms)
	I0404 22:54:34.941435   64902 fix.go:200] guest clock delta is within tolerance: 85.444943ms
	I0404 22:54:34.941442   64902 start.go:83] releasing machines lock for "embed-certs-143118", held for 20.732383788s
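
The fix.go lines above read the guest's "date +%s.%N" output, convert it to a timestamp, and accept the machine when the difference from the host clock stays inside a tolerance. Below is a minimal Go sketch of that comparison, using the Guest/Remote values copied from the log; the parseGuestClock helper and the 2-second tolerance are illustrative assumptions, not values taken from minikube itself.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock converts "1712271274.913625729" style output of
// `date +%s.%N` into a time.Time (float parsing loses a little
// nanosecond precision, which is fine for a tolerance check).
func parseGuestClock(s string) (time.Time, error) {
	secs, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	// Guest and Remote values copied from the log lines above.
	guest, err := parseGuestClock("1712271274.913625729")
	if err != nil {
		panic(err)
	}
	host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2024-04-04 22:54:34.828180786 +0000 UTC")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would sync the clock\n", delta)
	}
}
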
	I0404 22:54:34.941472   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.941770   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:34.944521   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.944943   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.944973   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.945137   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945761   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945994   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.946079   64902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:34.946123   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.946244   64902 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:34.946271   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.948974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949059   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949433   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949468   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949503   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949597   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949832   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949893   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950007   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950094   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950167   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950239   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.950311   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:35.061691   64902 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:35.068941   64902 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:35.222979   64902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:35.230776   64902 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:35.230861   64902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:35.250962   64902 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:35.250993   64902 start.go:494] detecting cgroup driver to use...
	I0404 22:54:35.251078   64902 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:35.270027   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:35.286582   64902 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:35.286642   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:35.305465   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:35.323473   64902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:35.448815   64902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:35.601864   64902 docker.go:233] disabling docker service ...
	I0404 22:54:35.602013   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:35.621617   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:35.638755   64902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:35.784051   64902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:35.909763   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:35.925820   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:35.946492   64902 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:35.946555   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.958517   64902 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:35.958584   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.971470   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.983820   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.996000   64902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:36.009730   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.022318   64902 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.042685   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.054710   64902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:36.066952   64902 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:36.067023   64902 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:36.083843   64902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
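
The sequence above probes the bridge-netfilter sysctl, loads br_netfilter when the /proc entry is missing, and then enables IPv4 forwarding. A small Go sketch of the same bring-up, assuming root privileges; the function layout and messages are illustrative, only the /proc paths and the modprobe call mirror what the log shows.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeNF); os.IsNotExist(err) {
		// Equivalent of the failed `sysctl` followed by `modprobe br_netfilter`.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("could not enable ip_forward:", err)
		return
	}
	fmt.Println("bridge netfilter available and ip_forward enabled")
}
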
	I0404 22:54:36.096564   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:36.228943   64902 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:54:36.376198   64902 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:36.376276   64902 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:36.382097   64902 start.go:562] Will wait 60s for crictl version
	I0404 22:54:36.382176   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:54:36.386651   64902 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:36.425845   64902 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:36.425931   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.456397   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.491436   64902 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:54:34.967133   65047 main.go:141] libmachine: (no-preload-024416) Calling .Start
	I0404 22:54:34.967354   65047 main.go:141] libmachine: (no-preload-024416) Ensuring networks are active...
	I0404 22:54:34.968261   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network default is active
	I0404 22:54:34.968700   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network mk-no-preload-024416 is active
	I0404 22:54:34.969252   65047 main.go:141] libmachine: (no-preload-024416) Getting domain xml...
	I0404 22:54:34.969956   65047 main.go:141] libmachine: (no-preload-024416) Creating domain...
	I0404 22:54:36.226773   65047 main.go:141] libmachine: (no-preload-024416) Waiting to get IP...
	I0404 22:54:36.227926   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.228451   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.228535   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.228427   65995 retry.go:31] will retry after 247.657069ms: waiting for machine to come up
	I0404 22:54:36.477997   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.478543   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.478576   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.478489   65995 retry.go:31] will retry after 249.517341ms: waiting for machine to come up
	I0404 22:54:36.730176   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.730636   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.730665   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.730591   65995 retry.go:31] will retry after 476.552832ms: waiting for machine to come up
	I0404 22:54:37.209208   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.209677   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.209709   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.209661   65995 retry.go:31] will retry after 435.547363ms: waiting for machine to come up
	I0404 22:54:37.647412   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.647985   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.648014   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.647932   65995 retry.go:31] will retry after 506.589084ms: waiting for machine to come up
	I0404 22:54:38.155673   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:38.156207   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:38.156255   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:38.156171   65995 retry.go:31] will retry after 890.290504ms: waiting for machine to come up
	I0404 22:54:36.493249   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:36.496475   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.496905   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:36.496937   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.497232   64902 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:36.502139   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:36.517272   64902 kubeadm.go:877] updating cluster {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:36.517432   64902 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:54:36.517497   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:36.564031   64902 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:54:36.564108   64902 ssh_runner.go:195] Run: which lz4
	I0404 22:54:36.568713   64902 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:54:36.573960   64902 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:54:36.574006   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:54:38.239579   64902 crio.go:462] duration metric: took 1.670908361s to copy over tarball
	I0404 22:54:38.239657   64902 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:54:40.665443   64902 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.425757745s)
	I0404 22:54:40.665487   64902 crio.go:469] duration metric: took 2.425877834s to extract the tarball
	I0404 22:54:40.665496   64902 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:54:40.703772   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:40.754221   64902 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:54:40.754248   64902 cache_images.go:84] Images are preloaded, skipping loading
	I0404 22:54:40.754256   64902 kubeadm.go:928] updating node { 192.168.61.137 8443 v1.29.3 crio true true} ...
	I0404 22:54:40.754363   64902 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-143118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:54:40.754464   64902 ssh_runner.go:195] Run: crio config
	I0404 22:54:40.811854   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:40.811888   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:40.811906   64902 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:54:40.811936   64902 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-143118 NodeName:embed-certs-143118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:54:40.812177   64902 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-143118"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:54:40.812313   64902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:54:40.824209   64902 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:54:40.824295   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:54:40.835585   64902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0404 22:54:40.854777   64902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:54:40.875571   64902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0404 22:54:40.896639   64902 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0404 22:54:40.901267   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:40.916843   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:41.050188   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:41.070935   64902 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118 for IP: 192.168.61.137
	I0404 22:54:41.070957   64902 certs.go:194] generating shared ca certs ...
	I0404 22:54:41.070972   64902 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:41.071132   64902 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:54:41.071191   64902 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:54:41.071205   64902 certs.go:256] generating profile certs ...
	I0404 22:54:41.071322   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/client.key
	I0404 22:54:41.071399   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key.e7f8ac5b
	I0404 22:54:41.071435   64902 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key
	I0404 22:54:41.071553   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:54:41.071585   64902 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:54:41.071596   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:54:41.071624   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:54:41.071649   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:54:41.071670   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:54:41.071725   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:41.072445   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:54:41.108370   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:54:41.142072   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:54:41.179263   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:54:41.226769   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0404 22:54:41.273570   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:54:41.306526   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:54:41.336764   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:54:41.367053   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:54:39.048106   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.048587   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.048616   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.048540   65995 retry.go:31] will retry after 946.742057ms: waiting for machine to come up
	I0404 22:54:39.997241   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.997737   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.997774   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.997679   65995 retry.go:31] will retry after 1.053079472s: waiting for machine to come up
	I0404 22:54:41.052284   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:41.052796   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:41.052835   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:41.052762   65995 retry.go:31] will retry after 1.551456209s: waiting for machine to come up
	I0404 22:54:42.606789   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:42.607297   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:42.607335   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:42.607228   65995 retry.go:31] will retry after 2.022953695s: waiting for machine to come up
	I0404 22:54:41.395449   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:54:41.549507   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:54:41.578394   64902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:54:41.597960   64902 ssh_runner.go:195] Run: openssl version
	I0404 22:54:41.604217   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:54:41.618340   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623568   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623626   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.630781   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:54:41.644981   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:54:41.657992   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663188   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663242   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.670992   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:54:41.683812   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:54:41.696848   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702441   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702499   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.709270   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
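
The openssl x509 -hash / ln -fs pairs above build the hashed-symlink layout that OpenSSL expects in /etc/ssl/certs, where each trusted CA is reachable as <subject-hash>.0. A sketch of that step in Go, assuming a hypothetical linkBySubjectHash helper that shells out to openssl for the hash value; the certificate path in main is one of the files mentioned in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// creates the <hash>.0 symlink that OpenSSL uses to locate trusted CAs.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, then point <hash>.0 at the certificate.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("symlink failed:", err)
		return
	}
	fmt.Println("CA certificate linked by subject hash")
}
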
	I0404 22:54:41.722509   64902 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:54:41.727688   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:54:41.734456   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:54:41.741006   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:54:41.748106   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:54:41.754559   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:54:41.761107   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
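
The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate will still be valid 24 hours from now. The same check can be expressed with Go's crypto/x509, as in this sketch; expiresWithin and the sample path are illustrative, not minikube's own code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at certPath stops being
// valid before now+window, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need regeneration")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
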
	I0404 22:54:41.768416   64902 kubeadm.go:391] StartCluster: {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:54:41.768497   64902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:54:41.768557   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.815664   64902 cri.go:89] found id: ""
	I0404 22:54:41.815737   64902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:54:41.829255   64902 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:54:41.829283   64902 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:54:41.829290   64902 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:54:41.829333   64902 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:54:41.842482   64902 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:54:41.843868   64902 kubeconfig.go:125] found "embed-certs-143118" server: "https://192.168.61.137:8443"
	I0404 22:54:41.846707   64902 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:54:41.858505   64902 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.137
	I0404 22:54:41.858544   64902 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:54:41.858558   64902 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:54:41.858616   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.905608   64902 cri.go:89] found id: ""
	I0404 22:54:41.905686   64902 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:54:41.928336   64902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:54:41.939893   64902 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:54:41.939913   64902 kubeadm.go:156] found existing configuration files:
	
	I0404 22:54:41.939967   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:54:41.950100   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:54:41.950159   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:54:41.961297   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:54:41.972267   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:54:41.972350   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:54:41.983395   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:54:41.994162   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:54:41.994256   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:54:42.005118   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:54:42.015514   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:54:42.015583   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:54:42.027013   64902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:54:42.037495   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:42.144612   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.301722   64902 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.157071112s)
	I0404 22:54:43.301751   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.527881   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.621621   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.708816   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:54:43.708949   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.209626   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.709138   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.734681   64902 api_server.go:72] duration metric: took 1.025863443s to wait for apiserver process to appear ...
	I0404 22:54:44.734715   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:54:44.734742   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.735296   64902 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
	I0404 22:54:45.235123   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.632681   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:44.633163   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:44.633192   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:44.633124   65995 retry.go:31] will retry after 2.627056472s: waiting for machine to come up
	I0404 22:54:47.262212   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:47.262588   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:47.262618   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:47.262539   65995 retry.go:31] will retry after 3.141452547s: waiting for machine to come up
	I0404 22:54:47.737311   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.737339   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:47.737356   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:47.782485   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.782519   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:48.235046   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.240034   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.240068   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:48.735331   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.745583   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.745624   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:49.235513   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:49.239826   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:54:49.248617   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:54:49.248647   64902 api_server.go:131] duration metric: took 4.51392228s to wait for apiserver health ...
	I0404 22:54:49.248655   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:49.248662   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:49.250501   64902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:54:49.252206   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:54:49.271104   64902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:54:49.297257   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:54:49.309692   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:54:49.309726   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:54:49.309733   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:54:49.309741   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:54:49.309746   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:54:49.309751   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:54:49.309756   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:54:49.309764   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:54:49.309768   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:54:49.309775   64902 system_pods.go:74] duration metric: took 12.491664ms to wait for pod list to return data ...
	I0404 22:54:49.309782   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:54:49.315864   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:54:49.315890   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:54:49.315901   64902 node_conditions.go:105] duration metric: took 6.114458ms to run NodePressure ...
	I0404 22:54:49.315926   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:49.610849   64902 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615336   64902 kubeadm.go:733] kubelet initialised
	I0404 22:54:49.615356   64902 kubeadm.go:734] duration metric: took 4.484086ms waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615364   64902 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:49.621224   64902 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.626963   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.626988   64902 pod_ready.go:81] duration metric: took 5.735991ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.626996   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.627002   64902 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.633596   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633620   64902 pod_ready.go:81] duration metric: took 6.610566ms for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.633628   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633634   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.639137   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639168   64902 pod_ready.go:81] duration metric: took 5.51865ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.639177   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639183   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.701443   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701475   64902 pod_ready.go:81] duration metric: took 62.283656ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.701488   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701497   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.100946   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100974   64902 pod_ready.go:81] duration metric: took 399.467513ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.100984   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100990   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.501078   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501115   64902 pod_ready.go:81] duration metric: took 400.110991ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.501126   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501135   64902 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.902773   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902804   64902 pod_ready.go:81] duration metric: took 401.658155ms for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.902817   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902826   64902 pod_ready.go:38] duration metric: took 1.287454227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:50.902842   64902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:54:50.915531   64902 ops.go:34] apiserver oom_adj: -16
	I0404 22:54:50.915553   64902 kubeadm.go:591] duration metric: took 9.086257382s to restartPrimaryControlPlane
	I0404 22:54:50.915562   64902 kubeadm.go:393] duration metric: took 9.147152807s to StartCluster
	I0404 22:54:50.915580   64902 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.915663   64902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:54:50.917278   64902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.917558   64902 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:54:50.919536   64902 out.go:177] * Verifying Kubernetes components...
	I0404 22:54:50.917632   64902 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:54:50.917801   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:50.921079   64902 addons.go:69] Setting default-storageclass=true in profile "embed-certs-143118"
	I0404 22:54:50.921090   64902 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-143118"
	I0404 22:54:50.921093   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:50.921114   64902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-143118"
	I0404 22:54:50.921138   64902 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-143118"
	W0404 22:54:50.921153   64902 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:54:50.921079   64902 addons.go:69] Setting metrics-server=true in profile "embed-certs-143118"
	I0404 22:54:50.921195   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921217   64902 addons.go:234] Setting addon metrics-server=true in "embed-certs-143118"
	W0404 22:54:50.921232   64902 addons.go:243] addon metrics-server should already be in state true
	I0404 22:54:50.921254   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921467   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921507   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921683   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921709   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921753   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921781   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.937216   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36235
	I0404 22:54:50.937299   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0404 22:54:50.937762   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.937802   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.938291   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938313   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938321   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938333   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938661   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.938670   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.939249   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939271   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939300   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.939311   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.941042   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0404 22:54:50.941525   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.942072   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.942100   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.942499   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.942729   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.946461   64902 addons.go:234] Setting addon default-storageclass=true in "embed-certs-143118"
	W0404 22:54:50.946482   64902 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:54:50.946510   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.946882   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.946915   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.955518   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0404 22:54:50.955557   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0404 22:54:50.955998   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956052   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956571   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956600   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956720   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956747   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956986   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957070   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957190   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.957232   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.959071   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.959241   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.961349   64902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:50.963021   64902 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:54:50.964576   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:54:50.964596   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:54:50.963102   64902 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:50.964618   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.964632   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:54:50.964657   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.965167   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0404 22:54:50.965591   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.966202   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.966227   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.966554   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.967093   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.967129   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.968169   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968490   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968564   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.968589   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968740   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.968871   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969009   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969078   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.969104   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.969280   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.969336   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.969576   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969732   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969883   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.985964   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0404 22:54:50.986398   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.986935   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.986961   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.987432   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.987635   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.989705   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.989956   64902 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:50.989970   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:54:50.989984   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.993058   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993505   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.993540   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993703   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.993916   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.994072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.994255   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:51.108711   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:51.128024   64902 node_ready.go:35] waiting up to 6m0s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:51.190119   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:51.199620   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:54:51.199648   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:54:51.223007   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:51.235174   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:54:51.235203   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:54:51.260985   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:51.261010   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:54:51.283555   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:52.285657   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095501296s)
	I0404 22:54:52.285706   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.062659931s)
	I0404 22:54:52.285731   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285744   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285755   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.285767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286091   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286108   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286118   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286128   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286218   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.286282   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286294   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286306   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286328   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.288015   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288022   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288013   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288092   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288106   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.288149   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.293937   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.293962   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.294199   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.294243   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.294254   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320063   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.036455618s)
	I0404 22:54:52.320206   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320223   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320525   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320546   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320559   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320568   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320821   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320853   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320860   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320870   64902 addons.go:470] Verifying addon metrics-server=true in "embed-certs-143118"
	I0404 22:54:52.323920   64902 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0404 22:54:50.405817   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:50.406204   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:50.406240   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:50.406163   65995 retry.go:31] will retry after 3.600637009s: waiting for machine to come up
	I0404 22:54:55.277382   65393 start.go:364] duration metric: took 3m58.692301923s to acquireMachinesLock for "old-k8s-version-343162"
	I0404 22:54:55.277485   65393 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:55.277502   65393 fix.go:54] fixHost starting: 
	I0404 22:54:55.277878   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:55.277920   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:55.297525   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0404 22:54:55.297963   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:55.298433   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:54:55.298456   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:55.298792   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:55.298962   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:54:55.299129   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetState
	I0404 22:54:55.300682   65393 fix.go:112] recreateIfNeeded on old-k8s-version-343162: state=Stopped err=<nil>
	I0404 22:54:55.300717   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	W0404 22:54:55.300937   65393 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:55.303925   65393 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-343162" ...
	I0404 22:54:52.325280   64902 addons.go:505] duration metric: took 1.407646741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0404 22:54:53.132199   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.136881   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.305412   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .Start
	I0404 22:54:55.305607   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring networks are active...
	I0404 22:54:55.306344   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network default is active
	I0404 22:54:55.306786   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network mk-old-k8s-version-343162 is active
	I0404 22:54:55.307281   65393 main.go:141] libmachine: (old-k8s-version-343162) Getting domain xml...
	I0404 22:54:55.308086   65393 main.go:141] libmachine: (old-k8s-version-343162) Creating domain...
	I0404 22:54:54.010930   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011401   65047 main.go:141] libmachine: (no-preload-024416) Found IP for machine: 192.168.50.77
	I0404 22:54:54.011426   65047 main.go:141] libmachine: (no-preload-024416) Reserving static IP address...
	I0404 22:54:54.011444   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has current primary IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011871   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.011896   65047 main.go:141] libmachine: (no-preload-024416) DBG | skip adding static IP to network mk-no-preload-024416 - found existing host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"}
	I0404 22:54:54.011924   65047 main.go:141] libmachine: (no-preload-024416) Reserved static IP address: 192.168.50.77
	I0404 22:54:54.011942   65047 main.go:141] libmachine: (no-preload-024416) Waiting for SSH to be available...
	I0404 22:54:54.011956   65047 main.go:141] libmachine: (no-preload-024416) DBG | Getting to WaitForSSH function...
	I0404 22:54:54.014714   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015164   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.015190   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015357   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH client type: external
	I0404 22:54:54.015401   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa (-rw-------)
	I0404 22:54:54.015469   65047 main.go:141] libmachine: (no-preload-024416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:54.015495   65047 main.go:141] libmachine: (no-preload-024416) DBG | About to run SSH command:
	I0404 22:54:54.015513   65047 main.go:141] libmachine: (no-preload-024416) DBG | exit 0
	I0404 22:54:54.140293   65047 main.go:141] libmachine: (no-preload-024416) DBG | SSH cmd err, output: <nil>: 
	I0404 22:54:54.140713   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetConfigRaw
	I0404 22:54:54.141290   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.144062   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144446   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.144476   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144752   65047 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/config.json ...
	I0404 22:54:54.144967   65047 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:54.144984   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:54.145210   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.147699   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148016   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.148046   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148244   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.148430   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148571   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.148878   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.149071   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.149082   65047 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:54.256984   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:54.257021   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257324   65047 buildroot.go:166] provisioning hostname "no-preload-024416"
	I0404 22:54:54.257354   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257530   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.259995   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260339   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.260369   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.260709   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260862   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260986   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.261172   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.261348   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.261360   65047 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-024416 && echo "no-preload-024416" | sudo tee /etc/hostname
	I0404 22:54:54.383376   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-024416
	
	I0404 22:54:54.383401   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.386482   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.386993   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.387035   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.387179   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.387368   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387705   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.387954   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.388225   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.388258   65047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-024416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-024416/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-024416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:54.506307   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:54.506341   65047 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:54.506358   65047 buildroot.go:174] setting up certificates
	I0404 22:54:54.506366   65047 provision.go:84] configureAuth start
	I0404 22:54:54.506375   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.506653   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.509737   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510146   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.510177   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.512864   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513212   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.513244   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513339   65047 provision.go:143] copyHostCerts
	I0404 22:54:54.513391   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:54.513410   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:54.513464   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:54.513547   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:54.513559   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:54.513579   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:54.513642   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:54.513653   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:54.513672   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:54.513728   65047 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.no-preload-024416 san=[127.0.0.1 192.168.50.77 localhost minikube no-preload-024416]
	I0404 22:54:54.574205   65047 provision.go:177] copyRemoteCerts
	I0404 22:54:54.574261   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:54.574292   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.577012   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577382   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.577417   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577576   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.577750   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.577913   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.578009   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:54.662795   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:54:54.688507   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:54.714322   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:54.743237   65047 provision.go:87] duration metric: took 236.859201ms to configureAuth
	I0404 22:54:54.743277   65047 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:54.743504   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:54:54.743573   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.746762   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747249   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.747277   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747454   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.747656   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747791   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747912   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.748073   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.748321   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.748342   65047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:55.023000   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:55.023025   65047 machine.go:97] duration metric: took 878.045187ms to provisionDockerMachine
	I0404 22:54:55.023038   65047 start.go:293] postStartSetup for "no-preload-024416" (driver="kvm2")
	I0404 22:54:55.023052   65047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:55.023072   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.023459   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:55.023493   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.026491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.026914   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.026938   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.027162   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.027350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.027490   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.027627   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.111497   65047 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:55.116152   65047 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:55.116178   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:55.116260   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:55.116331   65047 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:55.116412   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:55.126233   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:55.159228   65047 start.go:296] duration metric: took 136.175727ms for postStartSetup
	I0404 22:54:55.159267   65047 fix.go:56] duration metric: took 20.217610648s for fixHost
	I0404 22:54:55.159286   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.162009   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162439   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.162469   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162665   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.162941   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163119   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163353   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.163531   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:55.163697   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:55.163707   65047 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0404 22:54:55.277216   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271295.257019839
	
	I0404 22:54:55.277241   65047 fix.go:216] guest clock: 1712271295.257019839
	I0404 22:54:55.277254   65047 fix.go:229] Guest: 2024-04-04 22:54:55.257019839 +0000 UTC Remote: 2024-04-04 22:54:55.159270151 +0000 UTC m=+291.659154910 (delta=97.749688ms)
	I0404 22:54:55.277281   65047 fix.go:200] guest clock delta is within tolerance: 97.749688ms
	I0404 22:54:55.277308   65047 start.go:83] releasing machines lock for "no-preload-024416", held for 20.335673292s
	I0404 22:54:55.277345   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.277650   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:55.280507   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.280968   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.280998   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.281137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281654   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281845   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281930   65047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:55.281967   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.282068   65047 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:55.282085   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.284662   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.284975   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285007   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285034   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285217   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285379   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.285469   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285542   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.285677   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285722   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.286037   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.286197   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.286343   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.404469   65047 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:55.411104   65047 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:55.570700   65047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:55.577807   65047 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:55.577879   65047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:55.595095   65047 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:55.595115   65047 start.go:494] detecting cgroup driver to use...
	I0404 22:54:55.595180   65047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:55.611801   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:55.626402   65047 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:55.626460   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:55.642719   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:55.663140   65047 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:55.784155   65047 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:55.933709   65047 docker.go:233] disabling docker service ...
	I0404 22:54:55.933782   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:55.951743   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:55.966349   65047 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:56.129621   65047 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:56.295246   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:56.312815   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:56.334486   65047 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:56.334554   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.351987   65047 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:56.352067   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.368799   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.390239   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.409407   65047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:56.421785   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.434016   65047 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.457813   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.474509   65047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:56.489126   65047 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:56.489186   65047 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:56.509461   65047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:54:56.523048   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:56.708034   65047 ssh_runner.go:195] Run: sudo systemctl restart crio
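	Taken together, the runtime setup logged between 22:54:56.312 and 22:54:56.708 boils down to the following sketch of the CRI-O configuration applied over SSH (commands and values copied from the log above; only the br_netfilter/ip_forward fallback path that actually ran here is shown):

	    # point crictl at the CRI-O socket
	    sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	    " | sudo tee /etc/crictl.yaml

	    # pause image, cgroup driver and conmon cgroup in the drop-in config
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

	    # allow unprivileged low ports inside pods
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf

	    # kernel prerequisites, then restart the runtime
	    sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload && sudo systemctl restart crio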
	I0404 22:54:56.860854   65047 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:56.860931   65047 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:56.867310   65047 start.go:562] Will wait 60s for crictl version
	I0404 22:54:56.867380   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:56.871431   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:56.911015   65047 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:56.911096   65047 ssh_runner.go:195] Run: crio --version
	I0404 22:54:56.942313   65047 ssh_runner.go:195] Run: crio --version
	I0404 22:54:56.985247   65047 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0404 22:54:56.986628   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:56.990377   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.990775   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:56.990805   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.991069   65047 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:56.996831   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:57.013429   65047 kubeadm.go:877] updating cluster {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:57.013580   65047 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 22:54:57.013630   65047 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:57.066460   65047 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0404 22:54:57.066491   65047 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:54:57.066546   65047 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.066569   65047 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.066598   65047 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.066677   65047 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.066674   65047 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.066703   65047 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.066755   65047 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0404 22:54:57.066974   65047 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068510   65047 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068579   65047 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.068590   65047 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.068603   65047 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.068648   65047 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.068516   65047 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.068666   65047 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0404 22:54:57.068727   65047 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.282811   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.319459   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.329131   65047 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0404 22:54:57.329170   65047 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.329216   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.334176   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.395259   65047 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0404 22:54:57.395307   65047 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.395333   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.395348   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.406668   65047 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0404 22:54:57.406719   65047 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.406769   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.428655   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0404 22:54:57.430830   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.434683   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.439273   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439326   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.439406   65047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439428   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.565067   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690361   65047 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0404 22:54:57.690402   65047 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.690456   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690478   65047 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0404 22:54:57.690509   65047 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.690556   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690563   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690608   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0404 22:54:57.690635   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0404 22:54:57.690653   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690669   65047 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0404 22:54:57.690702   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690737   65047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:57.690702   65047 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690667   65047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690772   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.702038   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.703294   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0404 22:54:57.703314   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.861311   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.632801   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:58.632291   64902 node_ready.go:49] node "embed-certs-143118" has status "Ready":"True"
	I0404 22:54:58.632328   64902 node_ready.go:38] duration metric: took 7.504264997s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:58.632341   64902 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:58.640851   64902 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652075   64902 pod_ready.go:92] pod "coredns-76f75df574-9qh9s" in "kube-system" namespace has status "Ready":"True"
	I0404 22:54:58.652100   64902 pod_ready.go:81] duration metric: took 11.215741ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652114   64902 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:00.659219   64902 pod_ready.go:102] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"False"
	I0404 22:54:56.653772   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting to get IP...
	I0404 22:54:56.654708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.655215   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.655284   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.655205   66200 retry.go:31] will retry after 274.074964ms: waiting for machine to come up
	I0404 22:54:56.930932   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.931386   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.931451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.931345   66200 retry.go:31] will retry after 260.84968ms: waiting for machine to come up
	I0404 22:54:57.193886   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.194346   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.194368   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.194299   66200 retry.go:31] will retry after 334.40821ms: waiting for machine to come up
	I0404 22:54:57.530137   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.530694   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.530742   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.530631   66200 retry.go:31] will retry after 514.498594ms: waiting for machine to come up
	I0404 22:54:58.046394   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.046985   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.047025   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.046905   66200 retry.go:31] will retry after 466.899368ms: waiting for machine to come up
	I0404 22:54:58.515516   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.516090   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.516224   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.516149   66200 retry.go:31] will retry after 670.74835ms: waiting for machine to come up
	I0404 22:54:59.187992   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:59.188549   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:59.188579   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:59.188496   66200 retry.go:31] will retry after 849.489739ms: waiting for machine to come up
	I0404 22:55:00.039689   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:00.040255   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:00.040290   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:00.040190   66200 retry.go:31] will retry after 1.450679545s: waiting for machine to come up
	I0404 22:54:59.998625   65047 ssh_runner.go:235] Completed: which crictl: (2.307831694s)
	I0404 22:54:59.998689   65047 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.307893929s)
	I0404 22:54:59.998766   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0404 22:54:59.998771   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0: (2.296702121s)
	I0404 22:54:59.998810   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0: (2.295483217s)
	I0404 22:54:59.998704   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:59.998853   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998720   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.307994873s)
	I0404 22:54:59.998819   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.998930   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0404 22:54:59.998951   65047 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998854   65047 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.137514139s)
	I0404 22:54:59.998961   65047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998990   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998992   65047 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0404 22:54:59.998998   65047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.999027   65047 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:59.999070   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:55:00.009724   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:00.009945   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0404 22:55:01.660284   64902 pod_ready.go:92] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.660348   64902 pod_ready.go:81] duration metric: took 3.008175771s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.660362   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666661   64902 pod_ready.go:92] pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.666687   64902 pod_ready.go:81] duration metric: took 6.316138ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666701   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672891   64902 pod_ready.go:92] pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.672915   64902 pod_ready.go:81] duration metric: took 6.205783ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672928   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679237   64902 pod_ready.go:92] pod "kube-proxy-psst7" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.679268   64902 pod_ready.go:81] duration metric: took 6.332124ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679280   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833788   64902 pod_ready.go:92] pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.833817   64902 pod_ready.go:81] duration metric: took 154.528833ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833831   64902 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:03.841405   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:05.842970   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:01.492870   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:01.493414   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:01.493440   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:01.493366   66200 retry.go:31] will retry after 1.844779665s: waiting for machine to come up
	I0404 22:55:03.340065   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:03.340708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:03.340739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:03.340650   66200 retry.go:31] will retry after 1.954275124s: waiting for machine to come up
	I0404 22:55:05.297408   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:05.297938   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:05.297986   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:05.297876   66200 retry.go:31] will retry after 1.771664796s: waiting for machine to come up
	I0404 22:55:04.067188   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.068174109s)
	I0404 22:55:04.067285   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0404 22:55:04.067284   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.057537661s)
	I0404 22:55:04.067312   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067322   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0404 22:55:04.067244   65047 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (4.068231608s)
	I0404 22:55:04.067203   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (4.068350332s)
	I0404 22:55:04.067371   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0404 22:55:04.067380   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067395   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0404 22:55:04.067414   65047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:04.067486   65047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:06.758753   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.691348313s)
	I0404 22:55:06.758790   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0404 22:55:06.758816   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:06.758812   65047 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.691301898s)
	I0404 22:55:06.758845   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0404 22:55:06.758850   65047 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.691417107s)
	I0404 22:55:06.758876   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0404 22:55:06.758884   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:07.843038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:10.342614   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:07.072194   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:07.072586   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:07.072614   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:07.072532   66200 retry.go:31] will retry after 2.503470191s: waiting for machine to come up
	I0404 22:55:09.577312   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:09.577837   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:09.577877   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:09.577785   66200 retry.go:31] will retry after 3.900843515s: waiting for machine to come up
	I0404 22:55:09.231002   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.472089065s)
	I0404 22:55:09.231033   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0404 22:55:09.231064   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:09.231114   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:10.787933   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.556787731s)
	I0404 22:55:10.788017   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0404 22:55:10.788087   65047 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:10.788163   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:12.668739   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880546401s)
	I0404 22:55:12.668772   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0404 22:55:12.668806   65047 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:12.668877   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
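	The interleaved 65047 lines above are minikube's cache_images loop: since no preloaded tarball matched v1.30.0-rc.0, each required image is checked in the runtime, removed if its hash differs from the cached copy, and reloaded from the on-disk cache. As a per-image sketch (etcd used as the example, commands taken from the log; the scp step is omitted here because the tarballs already existed on the guest):

	    # is the image already present at the expected hash?
	    sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0

	    # if not, drop the stale copy and load the cached tarball
	    sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	    stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	    sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0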
	I0404 22:55:15.073701   64791 start.go:364] duration metric: took 55.863036918s to acquireMachinesLock for "default-k8s-diff-port-952083"
	I0404 22:55:15.073768   64791 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:55:15.073780   64791 fix.go:54] fixHost starting: 
	I0404 22:55:15.074227   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:15.074264   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:15.094449   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0404 22:55:15.094905   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:15.095422   64791 main.go:141] libmachine: Using API Version  1
	I0404 22:55:15.095446   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:15.095809   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:15.096067   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:15.096216   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 22:55:15.097752   64791 fix.go:112] recreateIfNeeded on default-k8s-diff-port-952083: state=Stopped err=<nil>
	I0404 22:55:15.097779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	W0404 22:55:15.097988   64791 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:55:15.100278   64791 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-952083" ...
	I0404 22:55:12.840463   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:14.841903   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:13.482797   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483264   65393 main.go:141] libmachine: (old-k8s-version-343162) Found IP for machine: 192.168.39.247
	I0404 22:55:13.483286   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has current primary IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483293   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserving static IP address...
	I0404 22:55:13.483713   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.483753   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserved static IP address: 192.168.39.247
	I0404 22:55:13.483776   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | skip adding static IP to network mk-old-k8s-version-343162 - found existing host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"}
	I0404 22:55:13.483790   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting for SSH to be available...
	I0404 22:55:13.483810   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Getting to WaitForSSH function...
	I0404 22:55:13.485889   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486260   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.486303   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486379   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH client type: external
	I0404 22:55:13.486411   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa (-rw-------)
	I0404 22:55:13.486451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:13.486465   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | About to run SSH command:
	I0404 22:55:13.486493   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | exit 0
	I0404 22:55:13.616680   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:13.617056   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetConfigRaw
	I0404 22:55:13.617819   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:13.620441   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.620959   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.620987   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.621259   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:55:13.621490   65393 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:13.621512   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:13.621715   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.624353   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.624739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.624769   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.625019   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.625218   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625392   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625540   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.625730   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.625987   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.626005   65393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:13.745162   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:13.745198   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745533   65393 buildroot.go:166] provisioning hostname "old-k8s-version-343162"
	I0404 22:55:13.745558   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745773   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.748881   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749270   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.749304   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749393   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.749619   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749787   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.750304   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.750599   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.750624   65393 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-343162 && echo "old-k8s-version-343162" | sudo tee /etc/hostname
	I0404 22:55:13.887895   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-343162
	
	I0404 22:55:13.887942   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.891149   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891606   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.891657   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891985   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.892226   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892464   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892629   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.892839   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.893110   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.893139   65393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-343162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-343162/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-343162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:14.026612   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:55:14.026651   65393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:14.026677   65393 buildroot.go:174] setting up certificates
	I0404 22:55:14.026691   65393 provision.go:84] configureAuth start
	I0404 22:55:14.026702   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:14.026996   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:14.030366   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.030831   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.030869   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.031171   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.033684   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034074   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.034115   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034266   65393 provision.go:143] copyHostCerts
	I0404 22:55:14.034319   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:14.034331   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:14.034384   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:14.034471   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:14.034479   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:14.034498   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:14.034559   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:14.034567   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:14.034585   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:14.034633   65393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-343162 san=[127.0.0.1 192.168.39.247 localhost minikube old-k8s-version-343162]
	I0404 22:55:14.305753   65393 provision.go:177] copyRemoteCerts
	I0404 22:55:14.305805   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:14.305830   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.308454   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308772   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.308812   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308940   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.309140   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.309315   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.309461   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.399449   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:14.431583   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0404 22:55:14.459757   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:55:14.489959   65393 provision.go:87] duration metric: took 463.256669ms to configureAuth
	I0404 22:55:14.489999   65393 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:14.490222   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:55:14.490314   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.493316   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493721   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.493746   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493935   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.494154   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494339   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494495   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.494669   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.494915   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.494940   65393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:14.813278   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:14.813306   65393 machine.go:97] duration metric: took 1.19180289s to provisionDockerMachine
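	(Note on the sysconfig step just above: the %!s(MISSING) tokens come from minikube's own logger dropping the argument for a literal %s in the command string; they are not corruption introduced by this report. Read together with the output shown, the command being run is, as a reconstruction rather than a verbatim quote:

	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

	The date +%!s(MISSING).%!N(MISSING) probe further below is the same logger pattern, i.e. date +%s.%N.)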
	I0404 22:55:14.813318   65393 start.go:293] postStartSetup for "old-k8s-version-343162" (driver="kvm2")
	I0404 22:55:14.813328   65393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:14.813365   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:14.813712   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:14.813739   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.817108   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817543   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.817577   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817770   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.817970   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.818192   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.818371   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.908922   65393 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:14.914161   65393 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:14.914190   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:14.914279   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:14.914388   65393 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:14.914522   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:14.924829   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:14.952058   65393 start.go:296] duration metric: took 138.725667ms for postStartSetup
	I0404 22:55:14.952103   65393 fix.go:56] duration metric: took 19.674601379s for fixHost
	I0404 22:55:14.952151   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.954833   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955233   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.955273   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955501   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.955712   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.955866   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.956022   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.956189   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.956355   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.956367   65393 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:55:15.073532   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271315.019545661
	
	I0404 22:55:15.073560   65393 fix.go:216] guest clock: 1712271315.019545661
	I0404 22:55:15.073569   65393 fix.go:229] Guest: 2024-04-04 22:55:15.019545661 +0000 UTC Remote: 2024-04-04 22:55:14.952108062 +0000 UTC m=+258.528860140 (delta=67.437599ms)
	I0404 22:55:15.073595   65393 fix.go:200] guest clock delta is within tolerance: 67.437599ms
	I0404 22:55:15.073603   65393 start.go:83] releasing machines lock for "old-k8s-version-343162", held for 19.79616835s
	I0404 22:55:15.073645   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.073982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:15.077001   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077416   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.077464   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077684   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078280   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078473   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078552   65393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:15.078592   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.078687   65393 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:15.078714   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.081320   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081704   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.081724   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081752   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081995   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082190   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082212   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.082249   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.082366   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082519   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082557   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.082639   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082782   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082912   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.165958   65393 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:15.206476   65393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:15.356625   65393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:15.364893   65393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:15.364973   65393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:15.387634   65393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:15.387663   65393 start.go:494] detecting cgroup driver to use...
	I0404 22:55:15.387728   65393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:15.408455   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:15.427753   65393 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:15.427812   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:15.443713   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:15.462657   65393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:15.611611   65393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:15.799003   65393 docker.go:233] disabling docker service ...
	I0404 22:55:15.799058   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:15.819716   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:15.838428   65393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:15.998060   65393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:16.143821   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:16.162284   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:16.185030   65393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0404 22:55:16.185124   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.201354   65393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:16.201422   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.214191   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.232995   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
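	(The three sed edits above set the pause image, the cgroup manager, and the conmon cgroup in /etc/crio/crio.conf.d/02-crio.conf. A minimal way to check the result by hand, assuming only the key names written by those substitutions, would be:

	    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected values, per the substitutions above:
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"

	This is an illustrative sketch, not output captured from the VM.)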
	I0404 22:55:16.246872   65393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:16.260730   65393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:16.272532   65393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:16.272601   65393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:16.289516   65393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:16.306546   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:16.469991   65393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:55:15.101770   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Start
	I0404 22:55:15.101942   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring networks are active...
	I0404 22:55:15.102800   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network default is active
	I0404 22:55:15.103162   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network mk-default-k8s-diff-port-952083 is active
	I0404 22:55:15.103685   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Getting domain xml...
	I0404 22:55:15.104598   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Creating domain...
	I0404 22:55:16.462772   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting to get IP...
	I0404 22:55:16.463782   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.464303   66369 retry.go:31] will retry after 209.546798ms: waiting for machine to come up
	I0404 22:55:16.618533   65393 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:16.618609   65393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:16.623677   65393 start.go:562] Will wait 60s for crictl version
	I0404 22:55:16.623740   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:16.627698   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:16.667097   65393 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:16.667195   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.700780   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.739287   65393 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0404 22:55:13.614563   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0404 22:55:13.614614   65047 cache_images.go:123] Successfully loaded all cached images
	I0404 22:55:13.614620   65047 cache_images.go:92] duration metric: took 16.548112387s to LoadCachedImages
	I0404 22:55:13.614638   65047 kubeadm.go:928] updating node { 192.168.50.77 8443 v1.30.0-rc.0 crio true true} ...
	I0404 22:55:13.614766   65047 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-024416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:13.614859   65047 ssh_runner.go:195] Run: crio config
	I0404 22:55:13.670321   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:13.670352   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:13.670368   65047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:13.670397   65047 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.77 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-024416 NodeName:no-preload-024416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:13.670593   65047 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-024416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:13.670688   65047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0404 22:55:13.683863   65047 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:13.683944   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:13.694995   65047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0404 22:55:13.717964   65047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0404 22:55:13.738088   65047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0404 22:55:13.762413   65047 ssh_runner.go:195] Run: grep 192.168.50.77	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:13.767294   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:13.783621   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:13.925851   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:13.946583   65047 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416 for IP: 192.168.50.77
	I0404 22:55:13.946612   65047 certs.go:194] generating shared ca certs ...
	I0404 22:55:13.946632   65047 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:13.946873   65047 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:13.946932   65047 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:13.946946   65047 certs.go:256] generating profile certs ...
	I0404 22:55:13.947038   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/client.key
	I0404 22:55:13.947126   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key.8f9148e7
	I0404 22:55:13.947183   65047 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key
	I0404 22:55:13.947338   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:13.947388   65047 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:13.947402   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:13.947442   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:13.947480   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:13.947501   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:13.947538   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:13.948161   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:13.987518   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:14.023442   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:14.054901   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:14.096855   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0404 22:55:14.140589   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:14.171957   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:14.200219   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:55:14.228332   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:14.255955   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:14.281822   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:14.310385   65047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:14.330943   65047 ssh_runner.go:195] Run: openssl version
	I0404 22:55:14.337412   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:14.350470   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355351   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355405   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.361799   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:14.374526   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:14.388775   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394578   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394643   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.401888   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:14.415796   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:14.431180   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437443   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437498   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.444439   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
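	(The link names used in this block, such as /etc/ssl/certs/b5213941.0 and /etc/ssl/certs/51391683.0, are OpenSSL subject-hash names. A minimal sketch of how one such link is produced by hand, using only paths already shown in the log:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

	TLS clients resolve a trusted CA by looking it up under that hash-named path, which is why each copied certificate is followed by a hash-named symlink here.)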
	I0404 22:55:14.458554   65047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:14.463877   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:14.471842   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:14.478423   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:14.485289   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:14.491991   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:14.499145   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:55:14.505734   65047 kubeadm.go:391] StartCluster: {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:14.505842   65047 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:14.505916   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.552731   65047 cri.go:89] found id: ""
	I0404 22:55:14.552818   65047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:14.564111   65047 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:14.564146   65047 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:14.564153   65047 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:14.564206   65047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:14.577068   65047 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:14.578074   65047 kubeconfig.go:125] found "no-preload-024416" server: "https://192.168.50.77:8443"
	I0404 22:55:14.579998   65047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:14.592375   65047 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.77
	I0404 22:55:14.592404   65047 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:14.592415   65047 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:14.592459   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.638880   65047 cri.go:89] found id: ""
	I0404 22:55:14.638970   65047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:14.663010   65047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:14.675814   65047 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:14.675853   65047 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:14.675900   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:14.688308   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:14.688375   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:14.702880   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:14.716656   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:14.716728   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:14.728605   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.739218   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:14.739283   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.750108   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:14.761607   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:14.761671   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:14.774056   65047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:14.787939   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:14.909162   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.334914   65047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.425719775s)
	I0404 22:55:16.334945   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.609889   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.686278   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.774460   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:16.774563   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.274803   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.775665   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.274707   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.298277   65047 api_server.go:72] duration metric: took 1.523817264s to wait for apiserver process to appear ...
	I0404 22:55:18.298307   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:18.298329   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:18.298961   65047 api_server.go:269] stopped: https://192.168.50.77:8443/healthz: Get "https://192.168.50.77:8443/healthz": dial tcp 192.168.50.77:8443: connect: connection refused
	I0404 22:55:16.842824   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:18.845888   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:21.344260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:16.740949   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:16.744057   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744491   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:16.744533   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744749   65393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:16.750820   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:16.769289   65393 kubeadm.go:877] updating cluster {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:16.769467   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:55:16.769531   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:16.827000   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:16.827077   65393 ssh_runner.go:195] Run: which lz4
	I0404 22:55:16.833494   65393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0404 22:55:16.839898   65393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:16.839972   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0404 22:55:18.896288   65393 crio.go:462] duration metric: took 2.062838778s to copy over tarball
	I0404 22:55:18.896369   65393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:16.676096   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676944   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.676858   66369 retry.go:31] will retry after 272.178949ms: waiting for machine to come up
	I0404 22:55:16.951171   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951771   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.951666   66369 retry.go:31] will retry after 296.205822ms: waiting for machine to come up
	I0404 22:55:17.249449   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250006   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250039   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.249901   66369 retry.go:31] will retry after 395.504604ms: waiting for machine to come up
	I0404 22:55:17.647678   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648471   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.648337   66369 retry.go:31] will retry after 465.589308ms: waiting for machine to come up
	I0404 22:55:18.116437   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117692   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117740   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.117645   66369 retry.go:31] will retry after 763.715105ms: waiting for machine to come up
	I0404 22:55:18.883468   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884148   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884214   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.884004   66369 retry.go:31] will retry after 1.098461705s: waiting for machine to come up
	I0404 22:55:19.984522   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985080   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985111   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:19.985029   66369 retry.go:31] will retry after 1.20224728s: waiting for machine to come up
	I0404 22:55:21.188813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:21.189284   66369 retry.go:31] will retry after 1.828752424s: waiting for machine to come up
	I0404 22:55:18.798849   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.231226   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.231258   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.231274   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.260339   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.260380   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.298635   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.330231   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.330264   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.798461   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.808690   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:21.808726   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.299332   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.305181   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.305213   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.798717   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.818393   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.818438   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.298741   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.305958   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.305991   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.798813   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.804261   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.804296   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.298487   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.304891   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:24.304928   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.798523   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.804212   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:55:24.811974   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:55:24.811999   65047 api_server.go:131] duration metric: took 6.513685326s to wait for apiserver health ...
	I0404 22:55:24.812008   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:24.812014   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:24.814353   65047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:55:23.841622   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:25.841739   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:22.459272   65393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562873515s)
	I0404 22:55:22.459298   65393 crio.go:469] duration metric: took 3.56298059s to extract the tarball
	I0404 22:55:22.459305   65393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:22.506283   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:22.545033   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:22.545060   65393 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:55:22.545126   65393 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.545137   65393 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.545173   65393 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.545196   65393 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.545238   65393 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.545302   65393 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.545208   65393 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0404 22:55:22.545446   65393 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.546976   65393 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0404 22:55:22.547023   65393 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.547031   65393 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.546980   65393 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.547034   65393 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.547073   65393 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.547003   65393 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.547012   65393 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.771721   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0404 22:55:22.774797   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.775291   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.776362   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.785066   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.785465   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.787095   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.935936   65393 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0404 22:55:22.935986   65393 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0404 22:55:22.936032   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.978411   65393 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0404 22:55:22.978460   65393 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.978521   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.988007   65393 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0404 22:55:22.988053   65393 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.988095   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001263   65393 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0404 22:55:23.001296   65393 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0404 22:55:23.001325   65393 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.001356   65393 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.001373   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001400   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007053   65393 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0404 22:55:23.007107   65393 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.007111   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0404 22:55:23.007115   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:23.007135   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007071   65393 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0404 22:55:23.007139   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:23.007170   65393 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.007207   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.009469   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.010460   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.144982   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.145036   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0404 22:55:23.149847   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0404 22:55:23.149886   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0404 22:55:23.149931   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0404 22:55:23.386832   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.542789   65393 cache_images.go:92] duration metric: took 997.708569ms to LoadCachedImages
	W0404 22:55:23.542901   65393 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0404 22:55:23.542928   65393 kubeadm.go:928] updating node { 192.168.39.247 8443 v1.20.0 crio true true} ...
	I0404 22:55:23.543082   65393 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-343162 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:23.543199   65393 ssh_runner.go:195] Run: crio config
	I0404 22:55:23.604999   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:55:23.605023   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:23.605035   65393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:23.605057   65393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-343162 NodeName:old-k8s-version-343162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0404 22:55:23.605247   65393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-343162"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:23.605324   65393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0404 22:55:23.618943   65393 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:23.619021   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:23.631449   65393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0404 22:55:23.653570   65393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:23.674444   65393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0404 22:55:23.696627   65393 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:23.701248   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:23.715979   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:23.872589   65393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:23.893271   65393 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162 for IP: 192.168.39.247
	I0404 22:55:23.893302   65393 certs.go:194] generating shared ca certs ...
	I0404 22:55:23.893323   65393 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:23.893508   65393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:23.893573   65393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:23.893591   65393 certs.go:256] generating profile certs ...
	I0404 22:55:23.893703   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.key
	I0404 22:55:23.893789   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key.184368d7
	I0404 22:55:23.893848   65393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key
	I0404 22:55:23.894013   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:23.894060   65393 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:23.894075   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:23.894119   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:23.894152   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:23.894184   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:23.894283   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:23.895146   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:23.930844   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:23.970129   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:24.012084   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:24.061026   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0404 22:55:24.099215   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 22:55:24.144924   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:24.179027   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:24.214467   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:24.261693   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:24.294049   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:24.330228   65393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:24.357217   65393 ssh_runner.go:195] Run: openssl version
	I0404 22:55:24.364368   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:24.377630   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384429   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384493   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.392806   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:24.409360   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:24.425575   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431294   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431359   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.438465   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:24.452037   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:24.467913   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473901   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473964   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.482161   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
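The link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: each CA copied into /usr/share/ca-certificates gets a <hash>.0 symlink under /etc/ssl/certs so TLS lookups can find it. The same two steps for one certificate, as a sketch:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # -f makes the link update idempotent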
	I0404 22:55:24.498553   65393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:24.505506   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:24.513143   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:24.520059   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:24.527197   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:24.534384   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:24.541499   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
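Each -checkend 86400 call above exits non-zero only if the certificate expires within the next 24 hours; a failure here is what would force certificate regeneration instead of reuse. The same check over the files listed in this log, as a small loop:

    # flag any control-plane certificate that expires within 24h (86400s)
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        || echo "expiring soon: ${c}.crt"
    done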
	I0404 22:55:24.548056   65393 kubeadm.go:391] StartCluster: {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:24.548157   65393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:24.548208   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.592650   65393 cri.go:89] found id: ""
	I0404 22:55:24.592732   65393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:24.605071   65393 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:24.605101   65393 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:24.605107   65393 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:24.605167   65393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:24.616615   65393 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:24.617680   65393 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-343162" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:24.618355   65393 kubeconfig.go:62] /home/jenkins/minikube-integration/16143-5297/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-343162" cluster setting kubeconfig missing "old-k8s-version-343162" context setting]
	I0404 22:55:24.619375   65393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:24.621522   65393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:24.632596   65393 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.247
	I0404 22:55:24.632645   65393 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:24.632660   65393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:24.632717   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.683167   65393 cri.go:89] found id: ""
	I0404 22:55:24.683241   65393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:24.703840   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:24.717504   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:24.717524   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:24.717579   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:24.729845   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:24.729916   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:24.741544   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:24.755004   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:24.755081   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:24.769007   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.782305   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:24.782371   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.792627   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:24.802737   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:24.802806   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
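The grep/rm pairs above are the stale-config cleanup: each /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so the kubeadm phases below can regenerate it. Expressed directly in shell:

    EP=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$EP" "/etc/kubernetes/${f}.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/${f}.conf"     # missing or stale: remove and regenerate
    done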
	I0404 22:55:24.814766   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:24.825422   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.006245   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.377950   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.371665115s)
	I0404 22:55:26.377988   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:24.816240   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:24.831576   65047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:55:24.861844   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:24.873557   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:24.873598   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:24.873616   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:24.873627   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:24.873642   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:24.873650   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:24.873661   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:24.873669   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:24.873677   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:24.873690   65047 system_pods.go:74] duration metric: took 11.825989ms to wait for pod list to return data ...
	I0404 22:55:24.873703   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:24.879876   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:24.879907   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:24.879922   65047 node_conditions.go:105] duration metric: took 6.209498ms to run NodePressure ...
	I0404 22:55:24.879960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.214317   65047 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:25.220926   65047 kubeadm.go:733] kubelet initialised
	I0404 22:55:25.220998   65047 kubeadm.go:734] duration metric: took 6.651112ms waiting for restarted kubelet to initialise ...
	I0404 22:55:25.221013   65047 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:25.228414   65047 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.245164   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245223   65047 pod_ready.go:81] duration metric: took 16.78145ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.245238   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245271   65047 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.258160   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258192   65047 pod_ready.go:81] duration metric: took 12.905596ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.258205   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258215   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.265054   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265079   65047 pod_ready.go:81] duration metric: took 6.856081ms for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.265090   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265098   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.276639   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276664   65047 pod_ready.go:81] duration metric: took 11.554875ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.276675   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276684   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.665602   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665631   65047 pod_ready.go:81] duration metric: took 388.935811ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.665640   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665646   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.066199   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066229   65047 pod_ready.go:81] duration metric: took 400.576415ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.066242   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066252   65047 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.466681   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466715   65047 pod_ready.go:81] duration metric: took 400.447851ms for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.466728   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466738   65047 pod_ready.go:38] duration metric: took 1.245712492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
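The extra wait above polls each system-critical pod for the Ready condition, skipping (and logging an error for) pods whose node is still NotReady, which is why every pod here is skipped within about 1.2s. Roughly the same check can be reproduced by hand with kubectl, using the label selectors from this log:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns   --for=condition=Ready --timeout=4m
    kubectl -n kube-system wait pod -l component=etcd     --for=condition=Ready --timeout=4m
    kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m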
	I0404 22:55:26.466760   65047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:55:26.482692   65047 ops.go:34] apiserver oom_adj: -16
	I0404 22:55:26.482712   65047 kubeadm.go:591] duration metric: took 11.918553713s to restartPrimaryControlPlane
	I0404 22:55:26.482721   65047 kubeadm.go:393] duration metric: took 11.977000438s to StartCluster
	I0404 22:55:26.482769   65047 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.482864   65047 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:26.484995   65047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.485328   65047 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:55:26.487534   65047 out.go:177] * Verifying Kubernetes components...
	I0404 22:55:26.485383   65047 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:55:26.485563   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:55:26.489030   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:26.489050   65047 addons.go:69] Setting default-storageclass=true in profile "no-preload-024416"
	I0404 22:55:26.489093   65047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-024416"
	I0404 22:55:26.489056   65047 addons.go:69] Setting metrics-server=true in profile "no-preload-024416"
	I0404 22:55:26.489149   65047 addons.go:234] Setting addon metrics-server=true in "no-preload-024416"
	W0404 22:55:26.489172   65047 addons.go:243] addon metrics-server should already be in state true
	I0404 22:55:26.489211   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489054   65047 addons.go:69] Setting storage-provisioner=true in profile "no-preload-024416"
	I0404 22:55:26.489277   65047 addons.go:234] Setting addon storage-provisioner=true in "no-preload-024416"
	W0404 22:55:26.489290   65047 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:55:26.489319   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489539   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489573   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489591   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489641   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489667   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489685   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.508693   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0404 22:55:26.509305   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.510142   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.510166   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.510664   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.510832   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.511052   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41779
	I0404 22:55:26.511503   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.512213   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.512232   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.512619   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.513233   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.513270   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.515412   65047 addons.go:234] Setting addon default-storageclass=true in "no-preload-024416"
	W0404 22:55:26.515459   65047 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:55:26.515498   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.515891   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.515954   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.534673   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I0404 22:55:26.535571   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.536148   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.536173   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.536663   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.536896   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.537603   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37341
	I0404 22:55:26.538883   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.541150   65047 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:55:26.539561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0404 22:55:26.539868   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.542772   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:55:26.542787   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:55:26.542805   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.544250   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.544270   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.544593   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.544676   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.545317   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.545356   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.546171   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.546188   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.546711   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.546722   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547227   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.547285   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.547464   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.547499   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.547905   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.548076   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.548259   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.564561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0404 22:55:26.565127   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.565730   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.565757   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.566293   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.566516   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.567998   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0404 22:55:26.568463   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.568520   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.569025   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.569047   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.569379   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.569543   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.571698   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.572551   65047 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.572566   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:55:26.572582   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.574538   65047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.019306   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019754   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019781   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:23.019722   66369 retry.go:31] will retry after 1.404874513s: waiting for machine to come up
	I0404 22:55:24.425830   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426412   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426442   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:24.426364   66369 retry.go:31] will retry after 2.757787773s: waiting for machine to come up
	I0404 22:55:26.576073   65047 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.576092   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:55:26.576106   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.580084   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.580585   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.581225   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.581277   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.581495   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.581699   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.581854   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.582320   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582345   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.582386   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582604   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.584316   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.584463   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.584614   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.731392   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:26.753167   65047 node_ready.go:35] waiting up to 6m0s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:26.829134   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.955286   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:55:26.955377   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:55:26.968469   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.984915   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:55:26.984948   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:55:27.028529   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.028558   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:55:27.092343   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.186244   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186319   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186642   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.186662   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.186677   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.186690   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186709   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186969   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.187017   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.187031   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.193602   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.193623   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.193903   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.193913   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.193920   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878278   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878305   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.878702   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.878746   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.878760   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878779   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878787   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.879104   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.879197   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.879228   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054442   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054471   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.054800   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.054858   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054874   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.054896   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054905   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.055233   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.055259   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.055270   65047 addons.go:470] Verifying addon metrics-server=true in "no-preload-024416"
	I0404 22:55:28.055236   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.057439   65047 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0404 22:55:28.058994   65047 addons.go:505] duration metric: took 1.573614168s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
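Once the addon manifests are applied, the "Verifying addon metrics-server=true" step boils down to the Deployment rolling out and its APIService registering. A manual equivalent, assuming the standard metrics-server object names used by the addon:

    kubectl -n kube-system rollout status deployment/metrics-server --timeout=3m
    kubectl get apiservice v1beta1.metrics.k8s.io     # should report as Available once it is serving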
	I0404 22:55:27.842914   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:30.342668   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:26.696179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.809448   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
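Together with the earlier certs, kubeconfig and kubelet-start phases, this restart path rebuilds the control plane piecewise rather than running a full kubeadm init. A condensed sketch of the logged phase order (config path as in this log):

    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo kubeadm init phase $phase --config "$CFG"   # intentionally unquoted: phases are multi-word
    done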
	I0404 22:55:26.955521   65393 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:26.955614   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.456445   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.956055   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.455728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.956667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.455874   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.955832   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.455677   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.956327   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:31.456660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
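The repeated pgrep calls above are a plain poll for the kube-apiserver process to come back after kubelet restarts the static pods, retried roughly every 500ms. As a shell-side sketch of the same wait:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5     # keep retrying until the apiserver static pod is running
    done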
	I0404 22:55:27.188296   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188797   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188827   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:27.188759   66369 retry.go:31] will retry after 2.351381492s: waiting for machine to come up
	I0404 22:55:29.541200   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541601   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541628   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:29.541553   66369 retry.go:31] will retry after 4.132646705s: waiting for machine to come up
	I0404 22:55:28.757883   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:31.257480   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:32.257641   65047 node_ready.go:49] node "no-preload-024416" has status "Ready":"True"
	I0404 22:55:32.257665   65047 node_ready.go:38] duration metric: took 5.504446554s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:32.257673   65047 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:32.263652   65047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268791   65047 pod_ready.go:92] pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.268811   65047 pod_ready.go:81] duration metric: took 5.13427ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268820   65047 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273130   65047 pod_ready.go:92] pod "etcd-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.273147   65047 pod_ready.go:81] duration metric: took 4.32194ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273155   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.841480   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:35.342317   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:31.956195   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.456027   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.955974   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.456657   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.956113   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.456687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.955878   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.456630   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.956721   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:36.456247   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.678106   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.678633   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Found IP for machine: 192.168.72.148
	I0404 22:55:33.678669   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserving static IP address...
	I0404 22:55:33.678686   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has current primary IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.679110   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.679145   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserved static IP address: 192.168.72.148
	I0404 22:55:33.679165   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | skip adding static IP to network mk-default-k8s-diff-port-952083 - found existing host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"}
	I0404 22:55:33.679184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Getting to WaitForSSH function...
	I0404 22:55:33.679196   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for SSH to be available...
	I0404 22:55:33.681734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682113   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.682144   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682283   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH client type: external
	I0404 22:55:33.682325   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa (-rw-------)
	I0404 22:55:33.682356   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:33.682372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | About to run SSH command:
	I0404 22:55:33.682427   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | exit 0
	I0404 22:55:33.812360   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:33.812704   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetConfigRaw
	I0404 22:55:33.813408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:33.816515   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.816970   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.817052   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.817322   64791 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/config.json ...
	I0404 22:55:33.817590   64791 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:33.817615   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:33.817828   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.820061   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820388   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.820421   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820604   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.820762   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.820948   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.821063   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.821211   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.821433   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.821450   64791 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:33.928894   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:33.928927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929163   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:55:33.929186   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.931838   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932292   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.932323   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932506   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.932688   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932844   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.933158   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.933321   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.933335   64791 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-952083 && echo "default-k8s-diff-port-952083" | sudo tee /etc/hostname
	I0404 22:55:34.060158   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-952083
	
	I0404 22:55:34.060185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.063179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063552   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.063586   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063777   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.063975   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064172   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064314   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.064477   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.064628   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.064650   64791 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-952083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-952083/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-952083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:34.186212   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
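The shell block above keeps /etc/hosts consistent with the new hostname without adding duplicates: rewrite an existing 127.0.1.1 line if there is one, append otherwise, do nothing if the name is already present. A small Go helper that renders the same snippet for an arbitrary hostname (the function name is illustrative, not minikube's):

    package main

    import "fmt"

    // hostsUpdateCmd returns a shell snippet that maps 127.0.1.1 to the given
    // hostname, idempotently: no change if the name is already in /etc/hosts.
    func hostsUpdateCmd(hostname string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
        fmt.Println(hostsUpdateCmd("default-k8s-diff-port-952083"))
    }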
	I0404 22:55:34.186240   64791 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:34.186309   64791 buildroot.go:174] setting up certificates
	I0404 22:55:34.186332   64791 provision.go:84] configureAuth start
	I0404 22:55:34.186351   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:34.186637   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:34.189184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189504   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.189544   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189635   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.191813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192315   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.192341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192507   64791 provision.go:143] copyHostCerts
	I0404 22:55:34.192560   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:34.192569   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:34.192622   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:34.192717   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:34.192726   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:34.192749   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:34.192812   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:34.192820   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:34.192838   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:34.192901   64791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-952083 san=[127.0.0.1 192.168.72.148 default-k8s-diff-port-952083 localhost minikube]
	I0404 22:55:34.326920   64791 provision.go:177] copyRemoteCerts
	I0404 22:55:34.326975   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:34.326997   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.329376   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329725   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.329752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.330118   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.330260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.330401   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.416395   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:34.443648   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0404 22:55:34.470803   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:55:34.497420   64791 provision.go:87] duration metric: took 311.071464ms to configureAuth
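configureAuth above regenerates a server certificate whose SANs cover the VM IP, the machine name, localhost and minikube, signed by the profile CA. A self-contained sketch of that step using only the standard library (the CA is generated inline here instead of being loaded from .minikube/certs, and error handling is elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative CA generated on the fly; minikube reuses its existing ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs listed in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-952083"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.148")},
            DNSNames:     []string{"default-k8s-diff-port-952083", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }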
	I0404 22:55:34.497454   64791 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:34.497663   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:55:34.497759   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.500799   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501149   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.501180   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501387   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.501595   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501915   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.502071   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.502271   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.502303   64791 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:34.775054   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:34.775085   64791 machine.go:97] duration metric: took 957.478657ms to provisionDockerMachine
	I0404 22:55:34.775099   64791 start.go:293] postStartSetup for "default-k8s-diff-port-952083" (driver="kvm2")
	I0404 22:55:34.775112   64791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:34.775131   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:34.775488   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:34.775534   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.778577   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779005   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.779036   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779204   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.779393   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.779524   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.779658   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.864675   64791 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:34.869694   64791 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:34.869730   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:34.869826   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:34.869920   64791 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:34.870061   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:34.880961   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:34.907037   64791 start.go:296] duration metric: took 131.924012ms for postStartSetup
	I0404 22:55:34.907080   64791 fix.go:56] duration metric: took 19.833301571s for fixHost
	I0404 22:55:34.907108   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.909893   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910291   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.910316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910473   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.910679   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.910880   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.911020   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.911190   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.911412   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.911428   64791 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:55:35.022167   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271334.996001157
	
	I0404 22:55:35.022190   64791 fix.go:216] guest clock: 1712271334.996001157
	I0404 22:55:35.022200   64791 fix.go:229] Guest: 2024-04-04 22:55:34.996001157 +0000 UTC Remote: 2024-04-04 22:55:34.907085076 +0000 UTC m=+358.286689706 (delta=88.916081ms)
	I0404 22:55:35.022224   64791 fix.go:200] guest clock delta is within tolerance: 88.916081ms
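The clock check above parses the guest's `date +%s.%N` output and compares it to the controller's wall clock, logging the delta. The same arithmetic in a small sketch (the one-second tolerance is an assumption for illustration; the log only reports that the delta is "within tolerance"):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1712271334.996001157" // as returned by `date +%s.%N` on the guest
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // illustrative threshold
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }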
	I0404 22:55:35.022231   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 19.948498144s
	I0404 22:55:35.022255   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.022485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:35.025707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026147   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.026179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026378   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.026876   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027047   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027120   64791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:35.027159   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.027276   64791 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:35.027318   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.030408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030592   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030785   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030819   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030910   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030929   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.030943   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.031146   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.031317   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031486   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.031653   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.105523   64791 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:35.145225   64791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:35.294577   64791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:35.301227   64791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:35.301318   64791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:35.318712   64791 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:35.318739   64791 start.go:494] detecting cgroup driver to use...
	I0404 22:55:35.318797   64791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:35.335684   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:35.351068   64791 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:35.351139   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:35.366084   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:35.381259   64791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:35.525652   64791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:35.702060   64791 docker.go:233] disabling docker service ...
	I0404 22:55:35.702137   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:35.720037   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:35.735605   64791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:35.868176   64791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:36.011977   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:36.030320   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:36.051953   64791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:55:36.052033   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.063475   64791 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:36.063539   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.075238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.086866   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.099524   64791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:36.112514   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.126407   64791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.146238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
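The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Expressed locally in Go, the same edits could be sketched with regexp replacement over the file contents (values copied from the log, helper name illustrative):

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf applies the same substitutions the logged sed commands perform.
    func rewriteCrioConf(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`+"\nconmon_cgroup = \"pod\"")
        return conf
    }

    func main() {
        in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(in))
    }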
	I0404 22:55:36.158896   64791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:36.170410   64791 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:36.170482   64791 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:36.185608   64791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:36.196706   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:36.327075   64791 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:55:36.470054   64791 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:36.470125   64791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:36.475664   64791 start.go:562] Will wait 60s for crictl version
	I0404 22:55:36.475727   64791 ssh_runner.go:195] Run: which crictl
	I0404 22:55:36.480165   64791 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:36.519941   64791 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:36.520021   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.549053   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.579491   64791 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:55:36.581026   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:36.583730   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584150   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:36.584184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584371   64791 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:36.588964   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:36.602269   64791 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:36.602374   64791 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:55:36.602416   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:36.641719   64791 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:55:36.641787   64791 ssh_runner.go:195] Run: which lz4
	I0404 22:55:36.646084   64791 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:55:36.650605   64791 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:36.650644   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:55:34.279364   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.288411   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:37.280858   65047 pod_ready.go:92] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.280895   65047 pod_ready.go:81] duration metric: took 5.007733221s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.280913   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286888   65047 pod_ready.go:92] pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.286912   65047 pod_ready.go:81] duration metric: took 5.987359ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286922   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292489   65047 pod_ready.go:92] pod "kube-proxy-zmx89" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.292518   65047 pod_ready.go:81] duration metric: took 5.588199ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292530   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298455   65047 pod_ready.go:92] pod "kube-scheduler-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.298478   65047 pod_ready.go:81] duration metric: took 5.939579ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298493   65047 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
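The pod_ready entries above poll each control-plane pod until its Ready condition turns True and record the wait duration. A compact client-go version of that loop (kubeconfig path, namespace, pod name and the 6-minute budget are taken from or modeled on this run; this is not the helper minikube itself uses):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "kube-scheduler-no-preload-024416", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for Ready")
    }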
	I0404 22:55:37.342788   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:39.841832   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.956068   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.456343   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.955665   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.456277   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.956681   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.455983   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.956000   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.456576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.956669   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.455855   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
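The repeated pgrep lines above are a half-second poll waiting for a kube-apiserver process to appear on the node. The same loop run locally would look roughly like this (the timeout value is an assumption; the pattern and interval come from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // illustrative budget
        for time.Now().Before(deadline) {
            // Exit status 0 means pgrep found a matching process.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }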
	I0404 22:55:38.281827   64791 crio.go:462] duration metric: took 1.635781318s to copy over tarball
	I0404 22:55:38.281962   64791 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:40.664660   64791 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.382669035s)
	I0404 22:55:40.664694   64791 crio.go:469] duration metric: took 2.382835483s to extract the tarball
	I0404 22:55:40.664704   64791 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:40.706370   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:40.753953   64791 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:55:40.753983   64791 cache_images.go:84] Images are preloaded, skipping loading
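The preload path above is: check `crictl images --output json` for the expected images, copy the ~400 MB tarball to /preloaded.tar.lz4, then unpack it with `tar -I lz4` into /var before re-checking. The extraction step alone, driven from Go, would be roughly (paths are the ones from this log; tar and lz4 must exist on the target):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Println("preload extracted into /var")
    }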
	I0404 22:55:40.753992   64791 kubeadm.go:928] updating node { 192.168.72.148 8444 v1.29.3 crio true true} ...
	I0404 22:55:40.754096   64791 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-952083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:40.754157   64791 ssh_runner.go:195] Run: crio config
	I0404 22:55:40.809287   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:40.809309   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:40.809318   64791 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:40.809338   64791 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.148 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-952083 NodeName:default-k8s-diff-port-952083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:40.809467   64791 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.148
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-952083"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:40.809531   64791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:55:40.821089   64791 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:40.821151   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:40.832576   64791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0404 22:55:40.853706   64791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:40.874277   64791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
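The kubeadm.yaml.new written above is rendered from the parameters listed in the kubeadm options line (advertise address, bind port 8444, CRI socket, node name). A toy text/template rendering of just the InitConfiguration part, with names that are illustrative rather than minikube's own template fields:

    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
        params := struct {
            NodeIP, NodeName, CRISocket string
            APIServerPort               int
        }{"192.168.72.148", "default-k8s-diff-port-952083", "unix:///var/run/crio/crio.sock", 8444}
        template.Must(template.New("kubeadm").Parse(initCfg)).Execute(os.Stdout, params)
    }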
	I0404 22:55:40.895096   64791 ssh_runner.go:195] Run: grep 192.168.72.148	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:40.899433   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:40.913456   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:41.041078   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:41.062340   64791 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083 for IP: 192.168.72.148
	I0404 22:55:41.062382   64791 certs.go:194] generating shared ca certs ...
	I0404 22:55:41.062402   64791 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:41.062583   64791 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:41.062640   64791 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:41.062662   64791 certs.go:256] generating profile certs ...
	I0404 22:55:41.062776   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/client.key
	I0404 22:55:41.062859   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key.c46373d6
	I0404 22:55:41.062921   64791 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key
	I0404 22:55:41.063037   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:41.063065   64791 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:41.063075   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:41.063099   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:41.063140   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:41.063166   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:41.063200   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:41.063842   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:41.113790   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:41.142967   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:41.174154   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:41.209434   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0404 22:55:41.244064   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:41.272716   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:41.297871   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:41.325547   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:41.352050   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:41.377876   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:41.404387   64791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:41.423187   64791 ssh_runner.go:195] Run: openssl version
	I0404 22:55:41.429873   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:41.442164   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447557   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447623   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.454131   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:41.467223   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:41.480671   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485831   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485898   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.492136   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:41.505696   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:41.519176   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524158   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524221   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.531176   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
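Each certificate pushed to /usr/share/ca-certificates above is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0). The hash comes from shelling out to openssl, roughly as below (a sketch; the real step finishes with the logged `sudo ln -fs`):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := "/etc/ssl/certs/" + hash + ".0"
        fmt.Printf("would link %s -> %s\n", link, cert)
    }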
	I0404 22:55:41.543412   64791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:41.548643   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:41.555139   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:41.562284   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:41.569434   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:41.576462   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:41.583084   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
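The `-checkend 86400` calls above ask whether each control-plane certificate is still valid for at least 24 hours. The same check in pure Go, for one of the certs named in the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse failed:", err)
            return
        }
        // Equivalent of `openssl x509 -checkend 86400`: still valid 24h from now?
        ok := time.Now().Add(24 * time.Hour).Before(cert.NotAfter)
        fmt.Printf("valid for another 24h: %v (NotAfter=%s)\n", ok, cert.NotAfter)
    }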
	I0404 22:55:41.589945   64791 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:41.590031   64791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:41.590093   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:41.632266   64791 cri.go:89] found id: ""
	I0404 22:55:41.632350   64791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:41.644142   64791 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:41.644164   64791 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:41.644170   64791 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:41.644215   64791 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:41.656179   64791 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:41.657324   64791 kubeconfig.go:125] found "default-k8s-diff-port-952083" server: "https://192.168.72.148:8444"
	I0404 22:55:41.659605   64791 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:39.306769   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.307336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:42.038454   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:44.341305   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.341781   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.956291   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.455751   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.955701   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.456455   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.956511   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.456335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.956604   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.456239   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.955763   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:46.456691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.672105   64791 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.148
	I0404 22:55:42.028665   64791 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:42.028686   64791 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:42.028762   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:42.091777   64791 cri.go:89] found id: ""
	I0404 22:55:42.091854   64791 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:42.113539   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:42.124875   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:42.124901   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:42.124954   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 22:55:42.135500   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:42.135570   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:42.146253   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 22:55:42.156700   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:42.156761   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:42.168384   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.179440   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:42.179516   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.191004   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 22:55:42.201446   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:42.201506   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:42.212001   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:42.223300   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:42.338171   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.549365   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.211152725s)
	I0404 22:55:43.549401   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.801115   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.882593   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.959297   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:43.959380   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.459491   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.960236   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.988764   64791 api_server.go:72] duration metric: took 1.029467706s to wait for apiserver process to appear ...
	I0404 22:55:44.988792   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:44.988813   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:43.615360   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:45.804976   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:47.806675   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:48.357574   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.357611   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.357628   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.395772   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.395808   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.488922   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.505422   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.505481   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:48.988969   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.994000   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.994032   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.489335   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.500302   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:49.500347   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.989893   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.994450   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 22:55:50.001679   64791 api_server.go:141] control plane version: v1.29.3
	I0404 22:55:50.001715   64791 api_server.go:131] duration metric: took 5.012915028s to wait for apiserver health ...
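	(Note: the wait above polls https://192.168.72.148:8444/healthz until the initial 403 and 500 responses give way to 200 ok. A rough, hedged sketch of such a poll follows; it is illustrative only, not minikube's api_server.go. The URL comes from the log, and TLS verification is skipped because the probe runs as the anonymous user.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Unauthenticated health probe, so skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.148:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}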
	I0404 22:55:50.001726   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:50.001737   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:50.004063   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:55:48.840760   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:50.842564   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.956402   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.456719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.956495   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.456256   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.955795   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.455864   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.956545   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.456336   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.956454   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:51.455845   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.005634   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:50.017205   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:55:50.041244   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:50.050985   64791 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:50.051033   64791 system_pods.go:61] "coredns-76f75df574-psx97" [9e220912-4d45-45d7-85cb-65396d969b27] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:50.051044   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [980bda86-5d40-4762-ae29-9a1d01bcd17f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:50.051055   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [0d3ec134-dc70-445a-a2f8-1c00f6711b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:50.051064   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [d82ba649-a2ae-479e-8aa1-e734bc69f702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:50.051074   64791 system_pods.go:61] "kube-proxy-ssg9w" [23098716-a4cd-4164-a604-fc27b807050f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:50.051081   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [58c1457c-816c-4f10-ac46-be14b4e20592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:50.051089   64791 system_pods.go:61] "metrics-server-57f55c9bc5-zbl54" [f754b7dd-4233-4faa-b8d0-0d42d4c432a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:50.051105   64791 system_pods.go:61] "storage-provisioner" [3ea130dd-2282-4094-b3ca-990faf89b79b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:50.051116   64791 system_pods.go:74] duration metric: took 9.852724ms to wait for pod list to return data ...
	I0404 22:55:50.051136   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:50.056106   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:50.056149   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:50.056161   64791 node_conditions.go:105] duration metric: took 5.019174ms to run NodePressure ...
	I0404 22:55:50.056180   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:50.366600   64791 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371116   64791 kubeadm.go:733] kubelet initialised
	I0404 22:55:50.371136   64791 kubeadm.go:734] duration metric: took 4.514303ms waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371152   64791 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:50.376537   64791 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.382379   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382406   64791 pod_ready.go:81] duration metric: took 5.841107ms for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.382415   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382421   64791 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.387835   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387862   64791 pod_ready.go:81] duration metric: took 5.433039ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.387874   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387883   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.395448   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395477   64791 pod_ready.go:81] duration metric: took 7.587276ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.395488   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395494   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.444766   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444796   64791 pod_ready.go:81] duration metric: took 49.295101ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.444807   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444813   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845479   64791 pod_ready.go:92] pod "kube-proxy-ssg9w" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:50.845502   64791 pod_ready.go:81] duration metric: took 400.682428ms for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845511   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
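	(Note: each pod_ready wait above repeatedly fetches the pod and checks its Ready condition, giving up after 4m0s. A minimal client-go sketch of that pattern follows; it is illustrative only. The kubeconfig path is an assumption, the pod name is taken from the log, and minikube's own pod_ready.go may differ.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod carries the condition Ready=True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-default-k8s-diff-port-952083", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}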
	I0404 22:55:50.305975   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.307023   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.842745   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:55.341222   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:51.955847   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.456474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.956717   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.456485   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.956668   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.455709   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.956540   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.455959   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.955819   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:56.456025   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.852107   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.856385   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.806097   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.806696   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.841694   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:00.341504   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.955746   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.456752   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.956458   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.455791   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.956520   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.456037   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.956424   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.456347   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.955807   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.456367   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.365969   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.853738   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:57.853764   64791 pod_ready.go:81] duration metric: took 7.008247096s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:57.853774   64791 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:59.862975   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:59.305144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.805592   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:02.341599   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.842260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.956676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.455725   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.956240   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.456026   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.955796   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.455684   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.955830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.455658   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.956488   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:06.456780   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.863099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.361040   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.362845   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:03.805883   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.305272   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:08.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:07.341339   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:09.341800   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.342526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.956270   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.456242   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.956623   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.455980   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.955699   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.456558   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.955713   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.455854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.955758   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:11.456543   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.363849   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.861759   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.805375   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:12.808721   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:13.841173   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.842526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.956223   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.456571   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.956246   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.456188   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.956319   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.456519   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.955719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.455682   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.956653   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:16.456447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.361956   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.362846   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.306163   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.806194   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.845799   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.342615   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:16.955750   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.456215   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.956505   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.456183   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.955744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.456227   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.955977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.455808   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.956276   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.456049   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.861226   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:19.861343   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.307101   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.805150   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.839779   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.841442   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:21.956160   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.456145   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.956335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.456579   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.956691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.456562   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.956005   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.456432   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.956467   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:26.456167   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.862937   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.360818   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.809913   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.305921   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.342163   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.347258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:26.956592   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:26.956691   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:27.005811   65393 cri.go:89] found id: ""
	I0404 22:56:27.005835   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.005842   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:27.005847   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:27.005917   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:27.042188   65393 cri.go:89] found id: ""
	I0404 22:56:27.042211   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.042235   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:27.042241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:27.042286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:27.079865   65393 cri.go:89] found id: ""
	I0404 22:56:27.079892   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.079900   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:27.079906   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:27.079958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:27.116998   65393 cri.go:89] found id: ""
	I0404 22:56:27.117024   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.117031   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:27.117037   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:27.117086   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:27.156188   65393 cri.go:89] found id: ""
	I0404 22:56:27.156214   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.156221   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:27.156236   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:27.156314   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:27.191588   65393 cri.go:89] found id: ""
	I0404 22:56:27.191620   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.191633   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:27.191640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:27.191687   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:27.231075   65393 cri.go:89] found id: ""
	I0404 22:56:27.231107   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.231117   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:27.231124   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:27.231190   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:27.270480   65393 cri.go:89] found id: ""
	I0404 22:56:27.270513   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.270525   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
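	(Note: the cri.go lines above enumerate containers by running crictl with the flags shown in the log, `crictl ps -a --quiet --name=<component>`, and treat empty output as "no container found". A small, hedged Go sketch of that call run locally rather than over SSH follows; illustrative only.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (any state) whose name
// matches filter, using the same crictl invocation seen in the log.
func listContainerIDs(filter string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+filter).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d container(s): %v\n", len(ids), ids)
}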
	I0404 22:56:27.270537   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:27.270561   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:27.324332   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:27.324373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:27.339847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:27.339874   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:27.468846   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:27.468868   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:27.468885   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:27.533979   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:27.534016   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.080447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:30.094371   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:30.094442   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:30.132187   65393 cri.go:89] found id: ""
	I0404 22:56:30.132211   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.132219   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:30.132225   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:30.132271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:30.173402   65393 cri.go:89] found id: ""
	I0404 22:56:30.173427   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.173437   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:30.173445   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:30.173509   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:30.212702   65393 cri.go:89] found id: ""
	I0404 22:56:30.212759   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.212773   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:30.212784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:30.212857   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:30.267124   65393 cri.go:89] found id: ""
	I0404 22:56:30.267153   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.267164   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:30.267171   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:30.267240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:30.314850   65393 cri.go:89] found id: ""
	I0404 22:56:30.314877   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.314887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:30.314895   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:30.314951   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:30.353961   65393 cri.go:89] found id: ""
	I0404 22:56:30.353985   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.353996   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:30.354003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:30.354065   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:30.393287   65393 cri.go:89] found id: ""
	I0404 22:56:30.393321   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.393333   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:30.393340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:30.393402   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:30.432268   65393 cri.go:89] found id: ""
	I0404 22:56:30.432304   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.432315   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:30.432333   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:30.432349   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:30.498906   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:30.498941   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.544676   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:30.544711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:30.595528   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:30.595562   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:30.610773   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:30.610811   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:30.688433   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:26.862939   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.360914   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.360952   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.806657   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:32.304653   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.841404   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.341994   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:33.188634   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:33.203199   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:33.203262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:33.243222   65393 cri.go:89] found id: ""
	I0404 22:56:33.243250   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.243257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:33.243262   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:33.243330   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:33.284525   65393 cri.go:89] found id: ""
	I0404 22:56:33.284550   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.284560   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:33.284567   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:33.284621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:33.324219   65393 cri.go:89] found id: ""
	I0404 22:56:33.324249   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.324266   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:33.324273   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:33.324328   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:33.362229   65393 cri.go:89] found id: ""
	I0404 22:56:33.362254   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.362265   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:33.362272   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:33.362333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.404616   65393 cri.go:89] found id: ""
	I0404 22:56:33.404651   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.404663   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:33.404671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:33.404741   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:33.445123   65393 cri.go:89] found id: ""
	I0404 22:56:33.445150   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.445160   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:33.445168   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:33.445227   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:33.485996   65393 cri.go:89] found id: ""
	I0404 22:56:33.486025   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.486033   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:33.486041   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:33.486108   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:33.524275   65393 cri.go:89] found id: ""
	I0404 22:56:33.524299   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.524307   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:33.524315   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:33.524326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:33.577095   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:33.577133   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:33.592799   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:33.592830   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:33.672784   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:33.672804   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:33.672815   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:33.748049   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:33.748091   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:36.293335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:36.308303   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:36.308395   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:36.346627   65393 cri.go:89] found id: ""
	I0404 22:56:36.346648   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.346656   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:36.346661   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:36.346704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:36.386923   65393 cri.go:89] found id: ""
	I0404 22:56:36.386950   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.386958   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:36.386963   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:36.387011   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:36.427795   65393 cri.go:89] found id: ""
	I0404 22:56:36.427824   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.427832   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:36.427838   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:36.427898   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:36.464754   65393 cri.go:89] found id: ""
	I0404 22:56:36.464780   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.464790   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:36.464797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:36.464864   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.361764   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:35.861327   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.807482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:37.305470   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.842142   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:38.843672   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.341676   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.499502   65393 cri.go:89] found id: ""
	I0404 22:56:36.499529   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.499540   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:36.499549   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:36.499610   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:36.536686   65393 cri.go:89] found id: ""
	I0404 22:56:36.536716   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.536726   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:36.536734   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:36.536782   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:36.572573   65393 cri.go:89] found id: ""
	I0404 22:56:36.572602   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.572614   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:36.572623   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:36.572683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:36.607444   65393 cri.go:89] found id: ""
	I0404 22:56:36.607474   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.607483   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:36.607491   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:36.607502   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:36.662990   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:36.663040   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:36.677996   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:36.678019   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:36.751850   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:36.751871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:36.751886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:36.831451   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:36.831484   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:39.393170   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:39.407581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:39.407640   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:39.441850   65393 cri.go:89] found id: ""
	I0404 22:56:39.441879   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.441889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:39.441896   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:39.441981   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:39.476890   65393 cri.go:89] found id: ""
	I0404 22:56:39.476919   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.476931   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:39.476938   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:39.477001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:39.511497   65393 cri.go:89] found id: ""
	I0404 22:56:39.511523   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.511534   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:39.511540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:39.511605   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:39.549485   65393 cri.go:89] found id: ""
	I0404 22:56:39.549514   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.549526   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:39.549534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:39.549594   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:39.589203   65393 cri.go:89] found id: ""
	I0404 22:56:39.589234   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.589243   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:39.589249   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:39.589311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:39.626903   65393 cri.go:89] found id: ""
	I0404 22:56:39.626926   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.626939   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:39.626946   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:39.627008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:39.660975   65393 cri.go:89] found id: ""
	I0404 22:56:39.660999   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.661007   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:39.661016   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:39.661067   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:39.697162   65393 cri.go:89] found id: ""
	I0404 22:56:39.697187   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.697195   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:39.697203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:39.697217   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:39.755642   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:39.755683   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:39.773016   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:39.773052   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:39.849466   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:39.849491   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:39.849507   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:39.940680   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:39.940716   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:38.360638   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:40.361088   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:39.305528   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.307136   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.844830   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:46.342481   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:42.486573   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:42.500548   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:42.500612   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:42.539890   65393 cri.go:89] found id: ""
	I0404 22:56:42.539926   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.539954   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:42.539964   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:42.540030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:42.576664   65393 cri.go:89] found id: ""
	I0404 22:56:42.576699   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.576709   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:42.576715   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:42.576816   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:42.615456   65393 cri.go:89] found id: ""
	I0404 22:56:42.615489   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.615500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:42.615507   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:42.615569   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:42.667886   65393 cri.go:89] found id: ""
	I0404 22:56:42.667917   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.667928   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:42.667935   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:42.668001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:42.726120   65393 cri.go:89] found id: ""
	I0404 22:56:42.726150   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.726162   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:42.726169   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:42.726233   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:42.781280   65393 cri.go:89] found id: ""
	I0404 22:56:42.781305   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.781316   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:42.781322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:42.781386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:42.818419   65393 cri.go:89] found id: ""
	I0404 22:56:42.818449   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.818459   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:42.818466   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:42.818531   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:42.867866   65393 cri.go:89] found id: ""
	I0404 22:56:42.867902   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.867911   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:42.867920   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:42.867935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:42.953141   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:42.953186   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:42.994936   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:42.994968   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:43.047257   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:43.047288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:43.062391   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:43.062426   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:43.138948   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:45.639469   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:45.654584   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:45.654662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:45.693244   65393 cri.go:89] found id: ""
	I0404 22:56:45.693267   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.693276   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:45.693281   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:45.693335   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:45.732468   65393 cri.go:89] found id: ""
	I0404 22:56:45.732501   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.732513   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:45.732520   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:45.732587   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:45.769504   65393 cri.go:89] found id: ""
	I0404 22:56:45.769538   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.769552   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:45.769560   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:45.769625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:45.815840   65393 cri.go:89] found id: ""
	I0404 22:56:45.815864   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.815872   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:45.815877   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:45.815987   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:45.856479   65393 cri.go:89] found id: ""
	I0404 22:56:45.856511   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.856522   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:45.856530   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:45.856596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:45.896703   65393 cri.go:89] found id: ""
	I0404 22:56:45.896726   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.896734   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:45.896739   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:45.896797   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:45.938227   65393 cri.go:89] found id: ""
	I0404 22:56:45.938253   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.938261   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:45.938266   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:45.938349   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:45.975762   65393 cri.go:89] found id: ""
	I0404 22:56:45.975790   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.975801   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:45.975811   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:45.975823   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:46.056563   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:46.056599   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.099210   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:46.099242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:46.153524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:46.153560   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:46.169268   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:46.169297   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:46.244893   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:42.361586   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:44.860609   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.806644   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:45.807773   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:47.807842   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.841498   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.341274   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.745334   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:48.760547   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:48.760625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:48.799676   65393 cri.go:89] found id: ""
	I0404 22:56:48.799699   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.799706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:48.799721   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:48.799780   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:48.843434   65393 cri.go:89] found id: ""
	I0404 22:56:48.843467   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.843476   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:48.843481   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:48.843544   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:48.895391   65393 cri.go:89] found id: ""
	I0404 22:56:48.895421   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.895440   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:48.895448   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:48.895513   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:48.936222   65393 cri.go:89] found id: ""
	I0404 22:56:48.936252   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.936263   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:48.936271   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:48.936334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:48.974533   65393 cri.go:89] found id: ""
	I0404 22:56:48.974563   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.974570   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:48.974575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:48.974629   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:49.015378   65393 cri.go:89] found id: ""
	I0404 22:56:49.015406   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.015424   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:49.015440   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:49.015501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:49.056623   65393 cri.go:89] found id: ""
	I0404 22:56:49.056647   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.056658   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:49.056664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:49.056725   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:49.102414   65393 cri.go:89] found id: ""
	I0404 22:56:49.102442   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.102453   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:49.102464   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:49.102476   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:49.158193   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:49.158240   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:49.173121   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:49.173147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:49.248973   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:49.249000   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:49.249017   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:49.341732   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:49.341778   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.861623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.863321   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.362926   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:50.305198   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:52.305614   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:53.348777   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:55.848753   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.888403   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:51.903442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:51.903503   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:51.941672   65393 cri.go:89] found id: ""
	I0404 22:56:51.941698   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.941706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:51.941712   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:51.941766   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:51.981373   65393 cri.go:89] found id: ""
	I0404 22:56:51.981396   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.981408   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:51.981413   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:51.981460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:52.019539   65393 cri.go:89] found id: ""
	I0404 22:56:52.019567   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.019575   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:52.019581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:52.019645   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:52.059015   65393 cri.go:89] found id: ""
	I0404 22:56:52.059050   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.059060   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:52.059073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:52.059134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:52.099879   65393 cri.go:89] found id: ""
	I0404 22:56:52.099904   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.099911   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:52.099917   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:52.099975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:52.140665   65393 cri.go:89] found id: ""
	I0404 22:56:52.140739   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.140752   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:52.140761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:52.140833   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:52.182063   65393 cri.go:89] found id: ""
	I0404 22:56:52.182091   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.182100   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:52.182106   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:52.182161   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:52.218610   65393 cri.go:89] found id: ""
	I0404 22:56:52.218636   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.218644   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:52.218651   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:52.218666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:52.232987   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:52.233014   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:52.307317   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:52.307343   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:52.307358   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:52.390484   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:52.390522   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:52.437568   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:52.437601   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:54.995830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:55.010758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:55.010818   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:55.057312   65393 cri.go:89] found id: ""
	I0404 22:56:55.057342   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.057354   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:55.057362   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:55.057440   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:55.097774   65393 cri.go:89] found id: ""
	I0404 22:56:55.097803   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.097815   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:55.097822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:55.097881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:55.134169   65393 cri.go:89] found id: ""
	I0404 22:56:55.134198   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.134207   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:55.134215   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:55.134268   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:55.174151   65393 cri.go:89] found id: ""
	I0404 22:56:55.174191   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.174203   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:55.174210   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:55.174297   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:55.213618   65393 cri.go:89] found id: ""
	I0404 22:56:55.213646   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.213655   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:55.213661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:55.213722   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:55.252406   65393 cri.go:89] found id: ""
	I0404 22:56:55.252435   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.252446   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:55.252455   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:55.252536   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:55.289595   65393 cri.go:89] found id: ""
	I0404 22:56:55.289623   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.289633   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:55.289640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:55.289702   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:55.328579   65393 cri.go:89] found id: ""
	I0404 22:56:55.328611   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.328621   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:55.328632   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:55.328647   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:55.384434   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:55.384470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:55.399311   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:55.399336   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:55.478341   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:55.478370   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:55.478404   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:55.560209   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:55.560242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:53.861243   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.361466   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:54.306859   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.804903   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.341302   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.342865   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.104854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:58.119735   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:58.119813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:58.158145   65393 cri.go:89] found id: ""
	I0404 22:56:58.158169   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.158182   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:58.158190   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:58.158255   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:58.198676   65393 cri.go:89] found id: ""
	I0404 22:56:58.198703   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.198714   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:58.198721   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:58.198779   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:58.236393   65393 cri.go:89] found id: ""
	I0404 22:56:58.236419   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.236430   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:58.236436   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:58.236505   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:58.274688   65393 cri.go:89] found id: ""
	I0404 22:56:58.274714   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.274724   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:58.274731   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:58.274796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:58.311908   65393 cri.go:89] found id: ""
	I0404 22:56:58.311935   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.311947   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:58.311956   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:58.312016   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:58.353477   65393 cri.go:89] found id: ""
	I0404 22:56:58.353500   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.353513   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:58.353518   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:58.353577   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.390981   65393 cri.go:89] found id: ""
	I0404 22:56:58.391004   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.391012   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:58.391017   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:58.391084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:58.433125   65393 cri.go:89] found id: ""
	I0404 22:56:58.433147   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.433160   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:58.433168   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:58.433181   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:58.488214   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:58.488251   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:58.503274   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:58.503315   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:58.583124   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:58.583149   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:58.583167   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:58.662856   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:58.662889   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:01.204061   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:01.219133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:01.219213   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:01.256615   65393 cri.go:89] found id: ""
	I0404 22:57:01.256641   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.256650   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:01.256655   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:01.256704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:01.294765   65393 cri.go:89] found id: ""
	I0404 22:57:01.294796   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.294805   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:01.294813   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:01.294874   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:01.334936   65393 cri.go:89] found id: ""
	I0404 22:57:01.334966   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.334976   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:01.334983   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:01.335030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:01.382389   65393 cri.go:89] found id: ""
	I0404 22:57:01.382429   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.382438   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:01.382443   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:01.382491   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:01.421962   65393 cri.go:89] found id: ""
	I0404 22:57:01.422001   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.422013   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:01.422020   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:01.422084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:01.460394   65393 cri.go:89] found id: ""
	I0404 22:57:01.460424   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.460432   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:01.460437   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:01.460484   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.365468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.862158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.807541   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.305535   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:03.305610   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:02.841121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:04.841717   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.497525   65393 cri.go:89] found id: ""
	I0404 22:57:01.497552   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.497561   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:01.497566   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:01.497623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:01.535396   65393 cri.go:89] found id: ""
	I0404 22:57:01.535431   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.535442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:01.535454   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:01.535469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:01.588397   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:01.588437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:01.603396   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:01.603427   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:01.686312   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:01.686332   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:01.686344   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:01.771352   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:01.771393   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.316206   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:04.330612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:04.330677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:04.377998   65393 cri.go:89] found id: ""
	I0404 22:57:04.378023   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.378031   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:04.378036   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:04.378098   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:04.421775   65393 cri.go:89] found id: ""
	I0404 22:57:04.421811   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.421822   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:04.421830   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:04.421894   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:04.463234   65393 cri.go:89] found id: ""
	I0404 22:57:04.463265   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.463276   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:04.463284   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:04.463347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:04.510879   65393 cri.go:89] found id: ""
	I0404 22:57:04.510907   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.510916   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:04.510925   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:04.511012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:04.552305   65393 cri.go:89] found id: ""
	I0404 22:57:04.552336   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.552348   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:04.552356   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:04.552413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:04.592631   65393 cri.go:89] found id: ""
	I0404 22:57:04.592661   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.592672   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:04.592679   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:04.592739   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:04.631854   65393 cri.go:89] found id: ""
	I0404 22:57:04.631883   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.631893   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:04.631900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:04.631966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:04.670530   65393 cri.go:89] found id: ""
	I0404 22:57:04.670554   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.670563   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:04.670570   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:04.670582   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:04.685546   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:04.685575   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:04.770377   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:04.770400   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:04.770420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:04.853637   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:04.853674   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.899690   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:04.899719   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:03.362881   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.363597   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.306377   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:07.805659   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:06.842313   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.341347   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:07.451203   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:07.465593   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:07.465665   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:07.505475   65393 cri.go:89] found id: ""
	I0404 22:57:07.505504   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.505516   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:07.505524   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:07.505584   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:07.543744   65393 cri.go:89] found id: ""
	I0404 22:57:07.543802   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.543814   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:07.543822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:07.543891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:07.586034   65393 cri.go:89] found id: ""
	I0404 22:57:07.586059   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.586067   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:07.586073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:07.586133   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:07.624880   65393 cri.go:89] found id: ""
	I0404 22:57:07.624908   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.624917   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:07.624932   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:07.624992   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:07.667653   65393 cri.go:89] found id: ""
	I0404 22:57:07.667684   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.667696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:07.667704   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:07.667798   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:07.704434   65393 cri.go:89] found id: ""
	I0404 22:57:07.704466   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.704478   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:07.704486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:07.704547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:07.741679   65393 cri.go:89] found id: ""
	I0404 22:57:07.741702   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.741710   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:07.741715   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:07.741760   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:07.782407   65393 cri.go:89] found id: ""
	I0404 22:57:07.782434   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.782442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:07.782450   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:07.782461   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:07.796141   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:07.796175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:07.880985   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:07.881004   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:07.881015   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:07.963986   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:07.964027   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:08.010874   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:08.010907   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.564802   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:10.581360   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:10.581430   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:10.619856   65393 cri.go:89] found id: ""
	I0404 22:57:10.619882   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.619889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:10.619895   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:10.619958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:10.662867   65393 cri.go:89] found id: ""
	I0404 22:57:10.662893   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.662900   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:10.662906   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:10.662966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:10.705241   65393 cri.go:89] found id: ""
	I0404 22:57:10.705277   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.705290   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:10.705298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:10.705363   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:10.743811   65393 cri.go:89] found id: ""
	I0404 22:57:10.743844   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.743855   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:10.743863   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:10.743918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:10.783648   65393 cri.go:89] found id: ""
	I0404 22:57:10.783672   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.783684   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:10.783690   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:10.783743   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:10.825606   65393 cri.go:89] found id: ""
	I0404 22:57:10.825639   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.825649   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:10.825657   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:10.825712   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:10.865138   65393 cri.go:89] found id: ""
	I0404 22:57:10.865168   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.865178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:10.865185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:10.865238   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:10.906855   65393 cri.go:89] found id: ""
	I0404 22:57:10.906881   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.906888   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:10.906895   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:10.906937   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.960341   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:10.960375   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:10.975199   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:10.975228   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:11.083944   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:11.083970   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:11.083985   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:11.166754   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:11.166794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:07.860607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.864277   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.806222   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:12.305904   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:11.841055   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.841573   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:15.843393   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.708326   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:13.724345   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:13.724413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:13.766699   65393 cri.go:89] found id: ""
	I0404 22:57:13.766732   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.766744   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:13.766751   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:13.766813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:13.804517   65393 cri.go:89] found id: ""
	I0404 22:57:13.804545   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.804556   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:13.804564   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:13.804623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:13.846168   65393 cri.go:89] found id: ""
	I0404 22:57:13.846202   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.846213   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:13.846219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:13.846278   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:13.894719   65393 cri.go:89] found id: ""
	I0404 22:57:13.894743   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.894753   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:13.894761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:13.894823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:13.929981   65393 cri.go:89] found id: ""
	I0404 22:57:13.930013   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.930024   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:13.930031   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:13.930102   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:13.968531   65393 cri.go:89] found id: ""
	I0404 22:57:13.968578   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.968590   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:13.968598   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:13.968662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:14.012460   65393 cri.go:89] found id: ""
	I0404 22:57:14.012492   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.012502   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:14.012509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:14.012571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:14.051175   65393 cri.go:89] found id: ""
	I0404 22:57:14.051207   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.051218   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:14.051228   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:14.051243   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:14.107968   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:14.108004   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:14.123756   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:14.123794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:14.209452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:14.209475   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:14.209493   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:14.287297   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:14.287334   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:11.864620   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.360720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.361734   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.307191   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.806031   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.341533   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.344059   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.832667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:16.850323   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:16.850396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:16.897389   65393 cri.go:89] found id: ""
	I0404 22:57:16.897415   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.897426   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:16.897433   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:16.897502   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:16.938893   65393 cri.go:89] found id: ""
	I0404 22:57:16.938927   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.938939   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:16.938945   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:16.939005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:16.978621   65393 cri.go:89] found id: ""
	I0404 22:57:16.978648   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.978655   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:16.978661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:16.978707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:17.036499   65393 cri.go:89] found id: ""
	I0404 22:57:17.036523   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.036532   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:17.036539   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:17.036596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:17.092932   65393 cri.go:89] found id: ""
	I0404 22:57:17.092958   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.092966   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:17.092972   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:17.093026   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:17.129925   65393 cri.go:89] found id: ""
	I0404 22:57:17.129948   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.129956   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:17.129963   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:17.130025   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:17.168192   65393 cri.go:89] found id: ""
	I0404 22:57:17.168218   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.168232   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:17.168238   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:17.168317   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:17.213090   65393 cri.go:89] found id: ""
	I0404 22:57:17.213115   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.213125   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:17.213136   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:17.213149   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:17.295489   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:17.295527   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:17.350780   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:17.350832   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:17.403580   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:17.403621   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:17.419093   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:17.419131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:17.503614   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.004676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:20.019340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:20.019421   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:20.059400   65393 cri.go:89] found id: ""
	I0404 22:57:20.059431   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.059442   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:20.059448   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:20.059510   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:20.104769   65393 cri.go:89] found id: ""
	I0404 22:57:20.104796   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.104808   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:20.104815   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:20.104875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:20.143209   65393 cri.go:89] found id: ""
	I0404 22:57:20.143233   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.143241   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:20.143248   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:20.143300   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:20.187941   65393 cri.go:89] found id: ""
	I0404 22:57:20.187976   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.187987   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:20.187995   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:20.188082   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:20.231158   65393 cri.go:89] found id: ""
	I0404 22:57:20.231192   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.231200   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:20.231206   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:20.231271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:20.281069   65393 cri.go:89] found id: ""
	I0404 22:57:20.281102   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.281113   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:20.281121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:20.281187   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:20.320403   65393 cri.go:89] found id: ""
	I0404 22:57:20.320436   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.320448   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:20.320456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:20.320529   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:20.371202   65393 cri.go:89] found id: ""
	I0404 22:57:20.371229   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.371237   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:20.371246   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:20.371256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:20.416698   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:20.416725   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:20.468328   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:20.468362   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:20.483250   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:20.483274   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:20.562841   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.562871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:20.562886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:18.363136   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.860719   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.806337   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:21.306098   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:22.841457   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:24.843001   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.141477   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:23.157859   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:23.157924   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:23.203492   65393 cri.go:89] found id: ""
	I0404 22:57:23.203526   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.203538   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:23.203545   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:23.203613   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:23.248190   65393 cri.go:89] found id: ""
	I0404 22:57:23.248220   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.248231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:23.248244   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:23.248310   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:23.284427   65393 cri.go:89] found id: ""
	I0404 22:57:23.284456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.284467   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:23.284475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:23.284562   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:23.322429   65393 cri.go:89] found id: ""
	I0404 22:57:23.322456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.322464   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:23.322469   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:23.322534   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:23.364030   65393 cri.go:89] found id: ""
	I0404 22:57:23.364069   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.364080   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:23.364087   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:23.364168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:23.408306   65393 cri.go:89] found id: ""
	I0404 22:57:23.408343   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.408356   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:23.408363   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:23.408423   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:23.452933   65393 cri.go:89] found id: ""
	I0404 22:57:23.452968   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.452976   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:23.452982   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:23.453036   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:23.494156   65393 cri.go:89] found id: ""
	I0404 22:57:23.494184   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.494193   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:23.494203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:23.494222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:23.548013   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:23.548053   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:23.564765   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:23.564797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:23.642661   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:23.642685   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:23.642700   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:23.737958   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:23.737996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:26.290576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:26.307580   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:26.307641   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:26.345971   65393 cri.go:89] found id: ""
	I0404 22:57:26.346000   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.346011   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:26.346018   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:26.346077   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:26.386979   65393 cri.go:89] found id: ""
	I0404 22:57:26.387009   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.387019   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:26.387026   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:26.387112   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:26.430627   65393 cri.go:89] found id: ""
	I0404 22:57:26.430649   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.430665   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:26.430671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:26.430724   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:26.469657   65393 cri.go:89] found id: ""
	I0404 22:57:26.469705   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.469716   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:26.469723   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:26.469794   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:22.861245   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.360840   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.804732   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.805727   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.805819   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.343674   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.843199   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:26.513853   65393 cri.go:89] found id: ""
	I0404 22:57:26.513877   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.513887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:26.513894   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:26.513954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:26.557702   65393 cri.go:89] found id: ""
	I0404 22:57:26.557731   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.557742   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:26.557749   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:26.557807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:26.595248   65393 cri.go:89] found id: ""
	I0404 22:57:26.595279   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.595291   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:26.595298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:26.595364   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:26.634552   65393 cri.go:89] found id: ""
	I0404 22:57:26.634581   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.634591   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:26.634603   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:26.634618   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:26.686928   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:26.686963   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:26.701308   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:26.701341   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:26.785243   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:26.785267   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:26.785286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:26.867513   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:26.867555   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.416286   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:29.434234   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:29.434308   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:29.488598   65393 cri.go:89] found id: ""
	I0404 22:57:29.488628   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.488637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:29.488643   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:29.488727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:29.541107   65393 cri.go:89] found id: ""
	I0404 22:57:29.541130   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.541138   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:29.541144   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:29.541204   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:29.597144   65393 cri.go:89] found id: ""
	I0404 22:57:29.597174   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.597186   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:29.597192   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:29.597258   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:29.636833   65393 cri.go:89] found id: ""
	I0404 22:57:29.636865   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.636874   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:29.636881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:29.636961   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:29.672867   65393 cri.go:89] found id: ""
	I0404 22:57:29.672893   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.672903   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:29.672909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:29.672970   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:29.713135   65393 cri.go:89] found id: ""
	I0404 22:57:29.713168   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.713180   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:29.713188   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:29.713279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:29.750862   65393 cri.go:89] found id: ""
	I0404 22:57:29.750895   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.750906   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:29.750914   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:29.750965   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:29.790274   65393 cri.go:89] found id: ""
	I0404 22:57:29.790302   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.790311   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:29.790320   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:29.790332   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.831848   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:29.831884   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:29.889443   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:29.889478   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:29.905468   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:29.905500   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:29.983283   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:29.983313   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:29.983328   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:27.862273   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:30.361189   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.806568   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:31.807248   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.341638   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.841257   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.567351   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:32.581988   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:32.582052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:32.619164   65393 cri.go:89] found id: ""
	I0404 22:57:32.619191   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.619201   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:32.619208   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:32.619262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:32.656844   65393 cri.go:89] found id: ""
	I0404 22:57:32.656882   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.656894   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:32.656902   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:32.656966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:32.693024   65393 cri.go:89] found id: ""
	I0404 22:57:32.693052   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.693064   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:32.693071   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:32.693134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:32.732579   65393 cri.go:89] found id: ""
	I0404 22:57:32.732609   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.732618   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:32.732625   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:32.732685   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:32.777586   65393 cri.go:89] found id: ""
	I0404 22:57:32.777624   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.777634   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:32.777644   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:32.777713   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:32.817079   65393 cri.go:89] found id: ""
	I0404 22:57:32.817115   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.817127   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:32.817136   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:32.817199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:32.856945   65393 cri.go:89] found id: ""
	I0404 22:57:32.856978   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.856988   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:32.856994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:32.857047   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:32.895047   65393 cri.go:89] found id: ""
	I0404 22:57:32.895070   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.895082   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:32.895090   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:32.895103   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.949865   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:32.949904   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:32.964629   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:32.964656   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:33.039424   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:33.039446   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:33.039458   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:33.115819   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:33.115854   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:35.659512   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:35.674512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:35.674589   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:35.714058   65393 cri.go:89] found id: ""
	I0404 22:57:35.714093   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.714102   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:35.714109   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:35.714173   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:35.753765   65393 cri.go:89] found id: ""
	I0404 22:57:35.753800   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.753811   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:35.753819   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:35.753883   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:35.792177   65393 cri.go:89] found id: ""
	I0404 22:57:35.792204   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.792216   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:35.792223   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:35.792288   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:35.829891   65393 cri.go:89] found id: ""
	I0404 22:57:35.829923   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.829943   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:35.829952   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:35.830012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:35.872224   65393 cri.go:89] found id: ""
	I0404 22:57:35.872255   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.872267   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:35.872276   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:35.872341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:35.909984   65393 cri.go:89] found id: ""
	I0404 22:57:35.910010   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.910020   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:35.910027   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:35.910088   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:35.948755   65393 cri.go:89] found id: ""
	I0404 22:57:35.948784   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.948795   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:35.948805   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:35.948868   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:35.984167   65393 cri.go:89] found id: ""
	I0404 22:57:35.984195   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.984203   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:35.984212   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:35.984224   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:35.998714   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:35.998740   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:36.070003   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:36.070028   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:36.070044   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:36.149404   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:36.149442   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:36.190749   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:36.190776   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.362424   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.861745   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.306625   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.805161   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.841627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.341476   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:38.747728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:38.761768   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:38.761840   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:38.802423   65393 cri.go:89] found id: ""
	I0404 22:57:38.802456   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.802467   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:38.802474   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:38.802525   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:38.843476   65393 cri.go:89] found id: ""
	I0404 22:57:38.843500   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.843508   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:38.843513   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:38.843583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:38.883093   65393 cri.go:89] found id: ""
	I0404 22:57:38.883125   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.883136   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:38.883145   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:38.883203   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:38.919802   65393 cri.go:89] found id: ""
	I0404 22:57:38.919831   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.919840   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:38.919847   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:38.919914   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:38.955242   65393 cri.go:89] found id: ""
	I0404 22:57:38.955280   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.955294   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:38.955302   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:38.955366   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:38.993377   65393 cri.go:89] found id: ""
	I0404 22:57:38.993409   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.993420   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:38.993428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:38.993486   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:39.033379   65393 cri.go:89] found id: ""
	I0404 22:57:39.033406   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.033417   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:39.033424   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:39.033499   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:39.076827   65393 cri.go:89] found id: ""
	I0404 22:57:39.076853   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.076862   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:39.076870   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:39.076880   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:39.134007   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:39.134045   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:39.149271   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:39.149298   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:39.222569   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:39.222593   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:39.222608   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:39.309583   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:39.309613   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:37.367650   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.862967   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.305489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.807107   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.842108   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.340297   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.343170   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.850659   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:41.866430   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:41.866493   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:41.906390   65393 cri.go:89] found id: ""
	I0404 22:57:41.906418   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.906437   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:41.906445   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:41.906506   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:41.942002   65393 cri.go:89] found id: ""
	I0404 22:57:41.942029   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.942039   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:41.942047   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:41.942105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:41.984054   65393 cri.go:89] found id: ""
	I0404 22:57:41.984078   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.984086   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:41.984092   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:41.984167   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:42.022757   65393 cri.go:89] found id: ""
	I0404 22:57:42.022779   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.022787   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:42.022792   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:42.022869   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:42.063261   65393 cri.go:89] found id: ""
	I0404 22:57:42.063284   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.063292   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:42.063297   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:42.063347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:42.102752   65393 cri.go:89] found id: ""
	I0404 22:57:42.102783   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.102800   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:42.102809   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:42.102872   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:42.139533   65393 cri.go:89] found id: ""
	I0404 22:57:42.139561   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.139570   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:42.139576   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:42.139627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:42.180944   65393 cri.go:89] found id: ""
	I0404 22:57:42.180976   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.180986   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:42.180996   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:42.181008   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:42.231499   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:42.231533   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:42.247888   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:42.247918   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:42.327828   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:42.327849   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:42.327860   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:42.413471   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:42.413509   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:44.958686   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:44.973081   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:44.973159   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:45.011068   65393 cri.go:89] found id: ""
	I0404 22:57:45.011095   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.011103   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:45.011108   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:45.011162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:45.052522   65393 cri.go:89] found id: ""
	I0404 22:57:45.052560   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.052578   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:45.052586   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:45.052649   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:45.091172   65393 cri.go:89] found id: ""
	I0404 22:57:45.091204   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.091215   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:45.091222   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:45.091289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:45.129853   65393 cri.go:89] found id: ""
	I0404 22:57:45.129883   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.129892   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:45.129900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:45.129960   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:45.167779   65393 cri.go:89] found id: ""
	I0404 22:57:45.167806   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.167815   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:45.167822   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:45.167881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:45.205111   65393 cri.go:89] found id: ""
	I0404 22:57:45.205139   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.205153   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:45.205158   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:45.205231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:45.240931   65393 cri.go:89] found id: ""
	I0404 22:57:45.240957   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.240965   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:45.240971   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:45.241033   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:45.277934   65393 cri.go:89] found id: ""
	I0404 22:57:45.277956   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.277964   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:45.277974   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:45.277989   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:45.332725   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:45.332755   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:45.349776   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:45.349806   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:45.428071   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:45.428098   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:45.428113   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:45.510148   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:45.510188   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:42.361878   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.861076   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.304994   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.305336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.841019   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.341801   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.052655   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:48.067339   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:48.067416   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:48.101725   65393 cri.go:89] found id: ""
	I0404 22:57:48.101756   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.101765   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:48.101771   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:48.101823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:48.137109   65393 cri.go:89] found id: ""
	I0404 22:57:48.137136   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.137147   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:48.137153   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:48.137216   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:48.174703   65393 cri.go:89] found id: ""
	I0404 22:57:48.174735   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.174745   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:48.174751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:48.174800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:48.213531   65393 cri.go:89] found id: ""
	I0404 22:57:48.213554   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.213565   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:48.213572   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:48.213621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:48.252457   65393 cri.go:89] found id: ""
	I0404 22:57:48.252483   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.252493   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:48.252500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:48.252566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:48.297769   65393 cri.go:89] found id: ""
	I0404 22:57:48.297797   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.297807   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:48.297815   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:48.297871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:48.336275   65393 cri.go:89] found id: ""
	I0404 22:57:48.336297   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.336312   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:48.336319   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:48.336373   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:48.375192   65393 cri.go:89] found id: ""
	I0404 22:57:48.375220   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.375230   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:48.375242   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:48.375256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:48.428276   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:48.428309   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:48.442545   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:48.442572   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:48.521287   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:48.521314   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:48.521326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:48.605921   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:48.605965   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:51.148671   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:51.163922   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:51.163993   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:51.205098   65393 cri.go:89] found id: ""
	I0404 22:57:51.205127   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.205137   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:51.205145   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:51.205210   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:51.241238   65393 cri.go:89] found id: ""
	I0404 22:57:51.241265   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.241276   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:51.241287   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:51.241352   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:51.279667   65393 cri.go:89] found id: ""
	I0404 22:57:51.279691   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.279700   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:51.279710   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:51.279767   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:51.325092   65393 cri.go:89] found id: ""
	I0404 22:57:51.325114   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.325121   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:51.325127   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:51.325175   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:51.367212   65393 cri.go:89] found id: ""
	I0404 22:57:51.367233   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.367244   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:51.367252   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:51.367334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:51.407716   65393 cri.go:89] found id: ""
	I0404 22:57:51.407739   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.407747   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:51.407753   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:51.407800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:51.448021   65393 cri.go:89] found id: ""
	I0404 22:57:51.448050   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.448060   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:51.448066   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:51.448138   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:46.862201   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:49.361167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.365099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.806224   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.306785   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.840934   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.845038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.484815   65393 cri.go:89] found id: ""
	I0404 22:57:51.484841   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.484849   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:51.484857   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:51.484868   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:51.538503   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:51.538530   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:51.552658   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:51.552686   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:51.628261   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:51.628288   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:51.628313   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:51.713890   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:51.713935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:54.257046   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:54.270797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:54.270853   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:54.308550   65393 cri.go:89] found id: ""
	I0404 22:57:54.308579   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.308589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:54.308596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:54.308666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:54.347736   65393 cri.go:89] found id: ""
	I0404 22:57:54.347764   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.347773   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:54.347779   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:54.347825   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:54.389179   65393 cri.go:89] found id: ""
	I0404 22:57:54.389209   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.389220   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:54.389227   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:54.389286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:54.429961   65393 cri.go:89] found id: ""
	I0404 22:57:54.429984   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.429993   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:54.429998   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:54.430053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:54.469383   65393 cri.go:89] found id: ""
	I0404 22:57:54.469406   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.469414   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:54.469419   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:54.469481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:54.506260   65393 cri.go:89] found id: ""
	I0404 22:57:54.506291   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.506302   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:54.506309   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:54.506374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:54.545050   65393 cri.go:89] found id: ""
	I0404 22:57:54.545080   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.545090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:54.545096   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:54.545144   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:54.584895   65393 cri.go:89] found id: ""
	I0404 22:57:54.584932   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.584943   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:54.584955   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:54.584969   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:54.637294   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:54.637329   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:54.652792   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:54.652821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:54.729248   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:54.729268   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:54.729286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:54.810107   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:54.810135   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:53.860177   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.861048   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.805077   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.806516   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.306075   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.341767   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.343121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:57.356688   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:57.371841   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:57.371901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:57.408063   65393 cri.go:89] found id: ""
	I0404 22:57:57.408087   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.408095   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:57.408112   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:57.408199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:57.444197   65393 cri.go:89] found id: ""
	I0404 22:57:57.444222   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.444231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:57.444241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:57.444291   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:57.479502   65393 cri.go:89] found id: ""
	I0404 22:57:57.479528   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.479536   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:57.479542   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:57.479593   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:57.515017   65393 cri.go:89] found id: ""
	I0404 22:57:57.515044   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.515057   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:57.515064   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:57.515113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:57.550568   65393 cri.go:89] found id: ""
	I0404 22:57:57.550595   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.550603   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:57.550609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:57.550670   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:57.587662   65393 cri.go:89] found id: ""
	I0404 22:57:57.587689   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.587697   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:57.587703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:57.587761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:57.624184   65393 cri.go:89] found id: ""
	I0404 22:57:57.624206   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.624213   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:57.624219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:57.624274   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:57.662289   65393 cri.go:89] found id: ""
	I0404 22:57:57.662312   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.662320   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:57.662329   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:57.662339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:57.677703   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:57.677728   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:57.768164   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:57.768191   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:57.768206   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.853138   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:57.853175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:57.896254   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:57.896291   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.448289   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:00.462243   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:00.462306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:00.500146   65393 cri.go:89] found id: ""
	I0404 22:58:00.500180   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.500191   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:00.500199   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:00.500260   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:00.539443   65393 cri.go:89] found id: ""
	I0404 22:58:00.539469   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.539477   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:00.539482   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:00.539532   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:00.582120   65393 cri.go:89] found id: ""
	I0404 22:58:00.582149   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.582160   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:00.582167   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:00.582231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:00.622269   65393 cri.go:89] found id: ""
	I0404 22:58:00.622291   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.622299   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:00.622305   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:00.622361   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:00.659947   65393 cri.go:89] found id: ""
	I0404 22:58:00.659980   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.659992   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:00.659999   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:00.660053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:00.695604   65393 cri.go:89] found id: ""
	I0404 22:58:00.695632   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.695642   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:00.695650   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:00.695707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:00.737962   65393 cri.go:89] found id: ""
	I0404 22:58:00.738044   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.738065   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:00.738075   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:00.738168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:00.781043   65393 cri.go:89] found id: ""
	I0404 22:58:00.781069   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.781076   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:00.781085   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:00.781096   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:00.828143   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:00.828174   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.883483   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:00.883521   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:00.899435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:00.899463   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:00.975258   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:00.975277   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:00.975288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.861562   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.362255   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.306310   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.805208   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.844797   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.341139   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:03.553230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:03.567909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:03.568005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:03.608638   65393 cri.go:89] found id: ""
	I0404 22:58:03.608666   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.608675   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:03.608680   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:03.608727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:03.647698   65393 cri.go:89] found id: ""
	I0404 22:58:03.647726   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.647735   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:03.647741   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:03.647804   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:03.686143   65393 cri.go:89] found id: ""
	I0404 22:58:03.686167   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.686176   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:03.686181   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:03.686230   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:03.723462   65393 cri.go:89] found id: ""
	I0404 22:58:03.723487   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.723494   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:03.723500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:03.723556   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:03.762299   65393 cri.go:89] found id: ""
	I0404 22:58:03.762332   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.762342   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:03.762350   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:03.762426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:03.798165   65393 cri.go:89] found id: ""
	I0404 22:58:03.798198   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.798215   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:03.798225   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:03.798292   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:03.837607   65393 cri.go:89] found id: ""
	I0404 22:58:03.837636   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.837648   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:03.837655   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:03.837716   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:03.877346   65393 cri.go:89] found id: ""
	I0404 22:58:03.877380   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.877394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:03.877410   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:03.877432   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:03.934033   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:03.934073   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:03.949106   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:03.949131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:04.022674   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:04.022695   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:04.022707   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:04.101187   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:04.101225   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:02.860519   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:04.861267   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.305637   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.804768   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.341713   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.841527   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:06.653116   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:06.667790   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:06.667867   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:06.712208   65393 cri.go:89] found id: ""
	I0404 22:58:06.712230   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.712238   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:06.712243   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:06.712289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:06.748489   65393 cri.go:89] found id: ""
	I0404 22:58:06.748522   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.748533   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:06.748540   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:06.748602   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:06.786720   65393 cri.go:89] found id: ""
	I0404 22:58:06.786745   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.786753   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:06.786758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:06.786805   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:06.825404   65393 cri.go:89] found id: ""
	I0404 22:58:06.825437   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.825444   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:06.825461   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:06.825515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:06.864851   65393 cri.go:89] found id: ""
	I0404 22:58:06.864879   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.864890   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:06.864898   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:06.864959   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:06.906229   65393 cri.go:89] found id: ""
	I0404 22:58:06.906258   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.906268   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:06.906274   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:06.906327   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:06.946126   65393 cri.go:89] found id: ""
	I0404 22:58:06.946153   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.946164   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:06.946172   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:06.946234   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:06.983746   65393 cri.go:89] found id: ""
	I0404 22:58:06.983779   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.983792   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:06.983805   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:06.983821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:07.038290   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:07.038330   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:07.053847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:07.053875   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:07.127453   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:07.127479   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:07.127496   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:07.206638   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:07.206676   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:09.753016   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:09.768850   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:09.768918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:09.804549   65393 cri.go:89] found id: ""
	I0404 22:58:09.804579   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.804589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:09.804596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:09.804653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:09.847299   65393 cri.go:89] found id: ""
	I0404 22:58:09.847323   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.847334   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:09.847341   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:09.847399   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:09.902064   65393 cri.go:89] found id: ""
	I0404 22:58:09.902093   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.902104   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:09.902111   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:09.902171   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:09.953956   65393 cri.go:89] found id: ""
	I0404 22:58:09.953986   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.953997   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:09.954003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:09.954071   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:10.007853   65393 cri.go:89] found id: ""
	I0404 22:58:10.007884   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.007892   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:10.007897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:10.007954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:10.046924   65393 cri.go:89] found id: ""
	I0404 22:58:10.046960   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.046970   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:10.046977   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:10.047038   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:10.086851   65393 cri.go:89] found id: ""
	I0404 22:58:10.086878   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.086890   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:10.086896   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:10.086956   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:10.126678   65393 cri.go:89] found id: ""
	I0404 22:58:10.126710   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.126719   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:10.126727   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:10.126741   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:10.142641   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:10.142669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:10.226953   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:10.226978   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:10.226991   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:10.310046   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:10.310078   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:10.356140   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:10.356173   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:06.861772   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.361398   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:11.361942   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.806398   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.306003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.341652   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.341848   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.343338   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.911501   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:12.924292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:12.924374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:12.959969   65393 cri.go:89] found id: ""
	I0404 22:58:12.959997   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.960007   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:12.960015   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:12.960064   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:12.998817   65393 cri.go:89] found id: ""
	I0404 22:58:12.998846   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.998856   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:12.998876   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:12.998943   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:13.035295   65393 cri.go:89] found id: ""
	I0404 22:58:13.035326   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.035337   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:13.035343   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:13.035403   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:13.076712   65393 cri.go:89] found id: ""
	I0404 22:58:13.076735   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.076744   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:13.076751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:13.076823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:13.112987   65393 cri.go:89] found id: ""
	I0404 22:58:13.113015   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.113023   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:13.113029   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:13.113092   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:13.155330   65393 cri.go:89] found id: ""
	I0404 22:58:13.155355   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.155366   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:13.155373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:13.155432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:13.194322   65393 cri.go:89] found id: ""
	I0404 22:58:13.194357   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.194368   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:13.194374   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:13.194432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:13.231939   65393 cri.go:89] found id: ""
	I0404 22:58:13.231969   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.231981   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:13.231993   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:13.232012   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:13.312244   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:13.312278   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:13.356605   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:13.356640   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:13.411760   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:13.411801   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:13.427373   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:13.427397   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:13.509840   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.010230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:16.023985   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:16.024050   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:16.063038   65393 cri.go:89] found id: ""
	I0404 22:58:16.063062   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.063070   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:16.063075   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:16.063128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:16.100140   65393 cri.go:89] found id: ""
	I0404 22:58:16.100170   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.100182   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:16.100189   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:16.100252   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:16.137319   65393 cri.go:89] found id: ""
	I0404 22:58:16.137354   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.137362   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:16.137373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:16.137425   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:16.173127   65393 cri.go:89] found id: ""
	I0404 22:58:16.173151   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.173159   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:16.173166   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:16.173212   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:16.213639   65393 cri.go:89] found id: ""
	I0404 22:58:16.213667   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.213676   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:16.213683   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:16.213744   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:16.255272   65393 cri.go:89] found id: ""
	I0404 22:58:16.255301   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.255312   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:16.255320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:16.255381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:16.289359   65393 cri.go:89] found id: ""
	I0404 22:58:16.289387   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.289397   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:16.289404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:16.289466   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:16.326711   65393 cri.go:89] found id: ""
	I0404 22:58:16.326738   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.326748   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:16.326791   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:16.326817   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:16.374754   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:16.374788   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:16.425956   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:16.425994   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:16.440815   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:16.440848   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:58:13.363022   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:15.860774   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.306144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.804788   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.844733   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:21.340627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	W0404 22:58:16.512725   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.512750   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:16.512770   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.099694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:19.113559   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:19.113617   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:19.148531   65393 cri.go:89] found id: ""
	I0404 22:58:19.148553   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.148561   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:19.148567   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:19.148627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:19.186728   65393 cri.go:89] found id: ""
	I0404 22:58:19.186750   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.186759   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:19.186764   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:19.186809   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:19.223221   65393 cri.go:89] found id: ""
	I0404 22:58:19.223269   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.223277   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:19.223283   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:19.223350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:19.260459   65393 cri.go:89] found id: ""
	I0404 22:58:19.260492   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.260502   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:19.260509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:19.260571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:19.298503   65393 cri.go:89] found id: ""
	I0404 22:58:19.298527   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.298534   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:19.298540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:19.298603   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:19.339562   65393 cri.go:89] found id: ""
	I0404 22:58:19.339595   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.339605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:19.339613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:19.339674   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:19.379350   65393 cri.go:89] found id: ""
	I0404 22:58:19.379383   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.379394   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:19.379401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:19.379501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:19.417360   65393 cri.go:89] found id: ""
	I0404 22:58:19.417387   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.417394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:19.417403   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:19.417420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.493267   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:19.493300   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:19.533913   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:19.533948   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:19.585900   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:19.585936   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:19.601225   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:19.601259   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:19.675774   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:17.861139   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.360910   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.806558   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.807066   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.304681   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.342104   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.843695   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:22.176660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:22.190161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:22.190239   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:22.226569   65393 cri.go:89] found id: ""
	I0404 22:58:22.226601   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.226612   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:22.226621   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:22.226678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:22.263193   65393 cri.go:89] found id: ""
	I0404 22:58:22.263221   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.263232   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:22.263239   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:22.263296   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:22.305585   65393 cri.go:89] found id: ""
	I0404 22:58:22.305613   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.305625   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:22.305632   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:22.305688   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:22.343575   65393 cri.go:89] found id: ""
	I0404 22:58:22.343602   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.343613   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:22.343620   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:22.343675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:22.381394   65393 cri.go:89] found id: ""
	I0404 22:58:22.381423   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.381432   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:22.381438   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:22.381488   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:22.423631   65393 cri.go:89] found id: ""
	I0404 22:58:22.423664   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.423673   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:22.423680   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:22.423755   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:22.462618   65393 cri.go:89] found id: ""
	I0404 22:58:22.462651   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.462662   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:22.462669   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:22.462729   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:22.499448   65393 cri.go:89] found id: ""
	I0404 22:58:22.499473   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.499481   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:22.499490   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:22.499504   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:22.552937   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:22.552976   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:22.568480   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:22.568508   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:22.647552   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:22.647575   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:22.647587   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:22.732328   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:22.732366   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:25.275187   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:25.290575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:25.290660   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:25.329699   65393 cri.go:89] found id: ""
	I0404 22:58:25.329728   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.329737   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:25.329744   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:25.329808   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:25.368206   65393 cri.go:89] found id: ""
	I0404 22:58:25.368239   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.368250   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:25.368257   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:25.368320   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:25.405444   65393 cri.go:89] found id: ""
	I0404 22:58:25.405490   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.405500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:25.405508   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:25.405566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:25.448856   65393 cri.go:89] found id: ""
	I0404 22:58:25.448883   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.448891   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:25.448897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:25.448952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:25.489308   65393 cri.go:89] found id: ""
	I0404 22:58:25.489340   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.489351   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:25.489358   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:25.489418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:25.532786   65393 cri.go:89] found id: ""
	I0404 22:58:25.532810   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.532820   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:25.532828   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:25.532887   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:25.569370   65393 cri.go:89] found id: ""
	I0404 22:58:25.569400   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.569409   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:25.569428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:25.569508   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:25.609516   65393 cri.go:89] found id: ""
	I0404 22:58:25.609542   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.609553   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:25.609564   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:25.609579   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:25.661976   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:25.662011   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:25.676718   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:25.676743   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:25.754582   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:25.754612   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:25.754629   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:25.837707   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:25.837751   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:22.367423   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:24.860670   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:27.306275   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.343146   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.840946   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.381474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:28.396475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:28.396549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:28.439207   65393 cri.go:89] found id: ""
	I0404 22:58:28.439239   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.439251   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:28.439259   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:28.439341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:28.481537   65393 cri.go:89] found id: ""
	I0404 22:58:28.481559   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.481567   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:28.481572   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:28.481622   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:28.522159   65393 cri.go:89] found id: ""
	I0404 22:58:28.522183   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.522194   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:28.522202   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:28.522267   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:28.563587   65393 cri.go:89] found id: ""
	I0404 22:58:28.563623   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.563634   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:28.563641   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:28.563706   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:28.601846   65393 cri.go:89] found id: ""
	I0404 22:58:28.601874   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.601885   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:28.601892   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:28.601971   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:28.639734   65393 cri.go:89] found id: ""
	I0404 22:58:28.639758   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.639765   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:28.639773   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:28.639832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:28.679049   65393 cri.go:89] found id: ""
	I0404 22:58:28.679079   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.679090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:28.679097   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:28.679152   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:28.721353   65393 cri.go:89] found id: ""
	I0404 22:58:28.721380   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.721390   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:28.721400   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:28.721414   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:28.776618   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:28.776666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:28.792435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:28.792473   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:28.867190   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:28.867219   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:28.867238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:28.950021   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:28.950056   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:26.861121   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.861752   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.862006   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:29.805482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.806982   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:32.843651   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.341399   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.496813   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:31.511401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:31.511462   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:31.552422   65393 cri.go:89] found id: ""
	I0404 22:58:31.552450   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.552458   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:31.552464   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:31.552518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:31.590548   65393 cri.go:89] found id: ""
	I0404 22:58:31.590579   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.590591   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:31.590598   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:31.590653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:31.626198   65393 cri.go:89] found id: ""
	I0404 22:58:31.626227   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.626238   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:31.626245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:31.626316   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:31.664925   65393 cri.go:89] found id: ""
	I0404 22:58:31.664952   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.664960   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:31.664966   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:31.665017   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:31.703008   65393 cri.go:89] found id: ""
	I0404 22:58:31.703031   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.703038   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:31.703044   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:31.703103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:31.741854   65393 cri.go:89] found id: ""
	I0404 22:58:31.741875   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.741884   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:31.741890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:31.741942   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:31.782573   65393 cri.go:89] found id: ""
	I0404 22:58:31.782603   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.782616   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:31.782624   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:31.782675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:31.819796   65393 cri.go:89] found id: ""
	I0404 22:58:31.819832   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.819844   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:31.819855   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:31.819872   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:31.836362   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:31.836396   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:31.916417   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:31.916438   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:31.916451   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:31.999266   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:31.999302   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:32.048147   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:32.048179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:34.601683   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:34.618245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:34.618329   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:34.660565   65393 cri.go:89] found id: ""
	I0404 22:58:34.660591   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.660598   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:34.660604   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:34.660654   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:34.696366   65393 cri.go:89] found id: ""
	I0404 22:58:34.696389   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.696397   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:34.696402   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:34.696463   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:34.734234   65393 cri.go:89] found id: ""
	I0404 22:58:34.734281   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.734293   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:34.734300   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:34.734369   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:34.770632   65393 cri.go:89] found id: ""
	I0404 22:58:34.770668   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.770681   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:34.770689   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:34.770752   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:34.808562   65393 cri.go:89] found id: ""
	I0404 22:58:34.808590   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.808600   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:34.808607   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:34.808677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:34.844177   65393 cri.go:89] found id: ""
	I0404 22:58:34.844209   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.844219   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:34.844228   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:34.844315   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:34.886060   65393 cri.go:89] found id: ""
	I0404 22:58:34.886095   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.886106   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:34.886114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:34.886174   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:34.924731   65393 cri.go:89] found id: ""
	I0404 22:58:34.924759   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.924769   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:34.924781   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:34.924798   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:34.940405   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:34.940437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:35.016762   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:35.016784   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:35.016800   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:35.096653   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:35.096688   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:35.144213   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:35.144241   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:33.361607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.860598   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:34.307621   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:36.805003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.341552   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.841619   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.702332   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:37.716442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:37.716515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:37.755197   65393 cri.go:89] found id: ""
	I0404 22:58:37.755233   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.755244   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:37.755251   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:37.755311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:37.799011   65393 cri.go:89] found id: ""
	I0404 22:58:37.799036   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.799044   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:37.799048   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:37.799105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:37.837437   65393 cri.go:89] found id: ""
	I0404 22:58:37.837466   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.837477   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:37.837486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:37.837543   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:37.876014   65393 cri.go:89] found id: ""
	I0404 22:58:37.876085   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.876097   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:37.876104   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:37.876179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:37.915088   65393 cri.go:89] found id: ""
	I0404 22:58:37.915121   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.915132   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:37.915140   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:37.915205   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:37.954991   65393 cri.go:89] found id: ""
	I0404 22:58:37.955028   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.955039   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:37.955047   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:37.955120   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:38.006875   65393 cri.go:89] found id: ""
	I0404 22:58:38.006906   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.006924   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:38.006930   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:38.006989   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:38.044477   65393 cri.go:89] found id: ""
	I0404 22:58:38.044513   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.044541   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:38.044553   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:38.044569   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:38.086425   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:38.086455   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:38.140159   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:38.140195   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:38.156371   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:38.156406   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:38.229011   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:38.229035   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:38.229058   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:40.809399   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:40.824612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:40.824694   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:40.869397   65393 cri.go:89] found id: ""
	I0404 22:58:40.869483   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.869510   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:40.869523   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:40.869583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:40.911732   65393 cri.go:89] found id: ""
	I0404 22:58:40.911760   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.911782   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:40.911788   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:40.911846   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:40.952165   65393 cri.go:89] found id: ""
	I0404 22:58:40.952193   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.952202   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:40.952209   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:40.952270   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:40.991566   65393 cri.go:89] found id: ""
	I0404 22:58:40.991598   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.991607   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:40.991613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:40.991661   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:41.033467   65393 cri.go:89] found id: ""
	I0404 22:58:41.033496   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.033505   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:41.033534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:41.033595   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:41.073350   65393 cri.go:89] found id: ""
	I0404 22:58:41.073395   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.073405   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:41.073410   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:41.073460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:41.113435   65393 cri.go:89] found id: ""
	I0404 22:58:41.113467   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.113478   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:41.113486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:41.113549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:41.152848   65393 cri.go:89] found id: ""
	I0404 22:58:41.152882   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.152892   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:41.152905   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:41.152919   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:41.199001   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:41.199039   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:41.251155   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:41.251200   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:41.268640   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:41.268669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:41.345101   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:41.345125   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:41.345142   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:37.862623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.865276   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:38.805692   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:40.806509   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.305851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:42.342629   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.841943   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.925251   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:43.940719   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:43.940838   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:43.982366   65393 cri.go:89] found id: ""
	I0404 22:58:43.982391   65393 logs.go:276] 0 containers: []
	W0404 22:58:43.982401   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:43.982409   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:43.982477   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:44.026906   65393 cri.go:89] found id: ""
	I0404 22:58:44.026941   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.026952   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:44.026959   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:44.027024   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:44.063914   65393 cri.go:89] found id: ""
	I0404 22:58:44.063940   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.063948   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:44.063954   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:44.064008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:44.107234   65393 cri.go:89] found id: ""
	I0404 22:58:44.107270   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.107283   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:44.107292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:44.107388   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:44.144613   65393 cri.go:89] found id: ""
	I0404 22:58:44.144637   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.144658   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:44.144664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:44.144734   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:44.184821   65393 cri.go:89] found id: ""
	I0404 22:58:44.184858   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.184866   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:44.184872   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:44.184920   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:44.227152   65393 cri.go:89] found id: ""
	I0404 22:58:44.227181   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.227192   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:44.227200   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:44.227262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:44.266503   65393 cri.go:89] found id: ""
	I0404 22:58:44.266533   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.266544   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:44.266599   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:44.266614   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:44.323524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:44.323565   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:44.340420   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:44.340456   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:44.441098   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:44.441120   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:44.441137   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:44.554462   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:44.554498   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:42.361529   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.362158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:45.805321   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.805597   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.342230   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.840465   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.101901   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:47.116485   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:47.116551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:47.158013   65393 cri.go:89] found id: ""
	I0404 22:58:47.158047   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.158063   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:47.158071   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:47.158136   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:47.194653   65393 cri.go:89] found id: ""
	I0404 22:58:47.194677   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.194688   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:47.194696   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:47.194753   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:47.235406   65393 cri.go:89] found id: ""
	I0404 22:58:47.235435   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.235447   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:47.235456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:47.235518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:47.278696   65393 cri.go:89] found id: ""
	I0404 22:58:47.278724   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.278733   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:47.278741   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:47.278832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:47.316839   65393 cri.go:89] found id: ""
	I0404 22:58:47.316871   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.316883   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:47.316890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:47.316952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:47.363249   65393 cri.go:89] found id: ""
	I0404 22:58:47.363274   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.363282   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:47.363287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:47.363336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:47.402331   65393 cri.go:89] found id: ""
	I0404 22:58:47.402354   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.402362   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:47.402369   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:47.402429   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:47.441136   65393 cri.go:89] found id: ""
	I0404 22:58:47.441156   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.441163   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:47.441171   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:47.441182   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:47.518956   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:47.518981   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:47.518996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:47.600303   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:47.600339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:47.642110   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:47.642138   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:47.694231   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:47.694267   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.209744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:50.229994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:50.230052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:50.299090   65393 cri.go:89] found id: ""
	I0404 22:58:50.299237   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.299257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:50.299266   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:50.299324   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:50.353476   65393 cri.go:89] found id: ""
	I0404 22:58:50.353506   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.353516   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:50.353524   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:50.353580   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:50.407652   65393 cri.go:89] found id: ""
	I0404 22:58:50.407677   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.407684   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:50.407692   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:50.407775   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:50.445622   65393 cri.go:89] found id: ""
	I0404 22:58:50.445651   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.445658   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:50.445666   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:50.445749   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:50.485764   65393 cri.go:89] found id: ""
	I0404 22:58:50.485791   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.485803   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:50.485810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:50.485891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:50.525552   65393 cri.go:89] found id: ""
	I0404 22:58:50.525583   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.525601   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:50.525609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:50.525675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:50.564452   65393 cri.go:89] found id: ""
	I0404 22:58:50.564477   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.564488   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:50.564498   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:50.564548   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:50.608534   65393 cri.go:89] found id: ""
	I0404 22:58:50.608564   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.608572   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:50.608580   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:50.608592   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:50.686645   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:50.686690   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:50.731681   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:50.731711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:50.788550   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:50.788593   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.804637   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:50.804666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:50.888452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:46.860641   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.362015   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.808312   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:52.304782   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:51.841258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.842803   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.341352   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.388937   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:53.403744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:53.403822   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:53.441796   65393 cri.go:89] found id: ""
	I0404 22:58:53.441817   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.441826   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:53.441835   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:53.441899   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:53.486302   65393 cri.go:89] found id: ""
	I0404 22:58:53.486326   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.486335   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:53.486340   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:53.486405   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:53.526429   65393 cri.go:89] found id: ""
	I0404 22:58:53.526455   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.526462   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:53.526467   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:53.526521   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:53.568076   65393 cri.go:89] found id: ""
	I0404 22:58:53.568099   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.568107   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:53.568114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:53.568185   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:53.608922   65393 cri.go:89] found id: ""
	I0404 22:58:53.608956   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.608964   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:53.608970   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:53.609027   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:53.645604   65393 cri.go:89] found id: ""
	I0404 22:58:53.645635   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.645646   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:53.645658   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:53.645718   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:53.684258   65393 cri.go:89] found id: ""
	I0404 22:58:53.684283   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.684293   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:53.684301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:53.684359   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:53.722616   65393 cri.go:89] found id: ""
	I0404 22:58:53.722647   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.722658   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:53.722671   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:53.722685   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:53.781126   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:53.781162   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:53.796188   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:53.796219   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:53.880536   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:53.880558   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:53.880571   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:53.970199   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:53.970236   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:51.861594   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.864468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.361876   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:54.306294   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.805489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.341483   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.840493   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.510589   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:56.525258   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:56.525336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:56.563463   65393 cri.go:89] found id: ""
	I0404 22:58:56.563495   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.563506   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:56.563514   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:56.563576   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:56.603330   65393 cri.go:89] found id: ""
	I0404 22:58:56.603355   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.603364   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:56.603369   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:56.603418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:56.645266   65393 cri.go:89] found id: ""
	I0404 22:58:56.645291   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.645351   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:56.645368   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:56.645426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:56.691073   65393 cri.go:89] found id: ""
	I0404 22:58:56.691098   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.691108   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:56.691121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:56.691179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:56.729687   65393 cri.go:89] found id: ""
	I0404 22:58:56.729726   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.729737   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:56.729744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:56.729832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:56.770699   65393 cri.go:89] found id: ""
	I0404 22:58:56.770732   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.770743   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:56.770751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:56.770815   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:56.808948   65393 cri.go:89] found id: ""
	I0404 22:58:56.808972   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.808982   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:56.808989   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:56.809069   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:56.852467   65393 cri.go:89] found id: ""
	I0404 22:58:56.852490   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.852501   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:56.852511   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:56.852526   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:56.903115   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:56.903147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:56.919521   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:56.919550   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:56.995265   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:56.995297   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:56.995317   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:57.071623   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:57.071660   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:59.614687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:59.628404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:59.628474   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:59.666293   65393 cri.go:89] found id: ""
	I0404 22:58:59.666320   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.666328   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:59.666332   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:59.666407   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:59.706066   65393 cri.go:89] found id: ""
	I0404 22:58:59.706093   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.706104   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:59.706111   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:59.706166   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:59.750467   65393 cri.go:89] found id: ""
	I0404 22:58:59.750493   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.750504   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:59.750511   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:59.750575   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:59.788477   65393 cri.go:89] found id: ""
	I0404 22:58:59.788499   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.788507   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:59.788512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:59.788558   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:59.829030   65393 cri.go:89] found id: ""
	I0404 22:58:59.829052   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.829062   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:59.829069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:59.829151   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:59.870121   65393 cri.go:89] found id: ""
	I0404 22:58:59.870146   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.870156   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:59.870163   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:59.870225   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:59.912149   65393 cri.go:89] found id: ""
	I0404 22:58:59.912170   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.912178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:59.912185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:59.912245   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:59.950867   65393 cri.go:89] found id: ""
	I0404 22:58:59.950903   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.950914   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:59.950924   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:59.950950   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:00.031828   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:00.031862   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:00.079398   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:00.079425   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:00.128993   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:00.129024   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:00.146214   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:00.146238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:00.224580   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:58.362770   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.861231   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.806527   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.305039   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:03.306465   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.845344   64902 pod_ready.go:81] duration metric: took 4m0.011500779s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:01.845369   64902 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:01.845377   64902 pod_ready.go:38] duration metric: took 4m3.21302807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:01.845392   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:01.845433   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:01.845499   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:01.927539   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:01.927568   64902 cri.go:89] found id: ""
	I0404 22:59:01.927578   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:01.927638   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.933410   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:01.933496   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:01.990735   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:01.990765   64902 cri.go:89] found id: ""
	I0404 22:59:01.990773   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:01.990823   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.996039   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:01.996159   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.043251   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:02.043278   64902 cri.go:89] found id: ""
	I0404 22:59:02.043286   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:02.043339   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.048227   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.048300   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.089311   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:02.089334   64902 cri.go:89] found id: ""
	I0404 22:59:02.089344   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:02.089400   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.094466   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.094531   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.146624   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:02.146646   64902 cri.go:89] found id: ""
	I0404 22:59:02.146653   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:02.146711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.151408   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.151491   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.194337   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:02.194361   64902 cri.go:89] found id: ""
	I0404 22:59:02.194370   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:02.194423   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.199225   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:02.199290   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:02.240467   64902 cri.go:89] found id: ""
	I0404 22:59:02.240494   64902 logs.go:276] 0 containers: []
	W0404 22:59:02.240505   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:02.240511   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:02.240572   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:02.284235   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:02.284261   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.284265   64902 cri.go:89] found id: ""
	I0404 22:59:02.284272   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:02.284337   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.289673   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.294519   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:02.294542   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.335250   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:02.335274   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:02.903414   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:02.903453   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:02.959171   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:02.959205   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:03.121608   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:03.121639   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:03.178477   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:03.178513   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:03.220790   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:03.220827   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:03.268659   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:03.268691   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:03.349809   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:03.349853   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:03.397296   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.397325   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:03.450216   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.450242   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.467583   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:03.467610   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:03.525777   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:03.525816   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.077111   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:06.097631   64902 api_server.go:72] duration metric: took 4m15.180038628s to wait for apiserver process to appear ...
	I0404 22:59:06.097660   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:06.097705   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:06.097767   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:06.146987   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:06.147013   64902 cri.go:89] found id: ""
	I0404 22:59:06.147023   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:06.147083   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.153474   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:06.153549   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:06.204491   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.204515   64902 cri.go:89] found id: ""
	I0404 22:59:06.204522   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:06.204576   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.209689   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:06.209768   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.248698   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:06.248730   64902 cri.go:89] found id: ""
	I0404 22:59:06.248741   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:06.248803   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.254268   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.254362   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.301004   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.301024   64902 cri.go:89] found id: ""
	I0404 22:59:06.301034   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:06.301093   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.306557   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.306625   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.350111   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:06.350136   64902 cri.go:89] found id: ""
	I0404 22:59:06.350146   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:06.350205   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.355488   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.355574   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.724888   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:02.738926   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:02.738986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:02.780451   65393 cri.go:89] found id: ""
	I0404 22:59:02.780475   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.780486   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:02.780493   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:02.780551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:02.825725   65393 cri.go:89] found id: ""
	I0404 22:59:02.825745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.825753   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:02.825758   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:02.825806   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.866717   65393 cri.go:89] found id: ""
	I0404 22:59:02.866745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.866752   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:02.866757   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.866803   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.909020   65393 cri.go:89] found id: ""
	I0404 22:59:02.909040   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.909048   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:02.909053   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.909103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.951031   65393 cri.go:89] found id: ""
	I0404 22:59:02.951055   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.951064   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:02.951069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.951128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:03.000274   65393 cri.go:89] found id: ""
	I0404 22:59:03.000304   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.000315   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:03.000322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:03.000385   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:03.041766   65393 cri.go:89] found id: ""
	I0404 22:59:03.041797   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.041807   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:03.041814   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:03.041871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:03.086600   65393 cri.go:89] found id: ""
	I0404 22:59:03.086623   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.086631   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:03.086639   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:03.086654   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:03.145868   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.145902   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.164345   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:03.164373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:03.239295   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:03.239331   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:03.239347   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:03.337429   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.337471   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:05.885881   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:05.899569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:05.899627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:05.939067   65393 cri.go:89] found id: ""
	I0404 22:59:05.939090   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.939097   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:05.939104   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:05.939163   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:05.977410   65393 cri.go:89] found id: ""
	I0404 22:59:05.977434   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.977441   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:05.977447   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:05.977492   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.018127   65393 cri.go:89] found id: ""
	I0404 22:59:06.018149   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.018156   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:06.018161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.018211   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.057280   65393 cri.go:89] found id: ""
	I0404 22:59:06.057316   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.057327   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:06.057334   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.057396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.095220   65393 cri.go:89] found id: ""
	I0404 22:59:06.095246   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.095255   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:06.095262   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.095334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:06.136192   65393 cri.go:89] found id: ""
	I0404 22:59:06.136291   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.136310   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:06.136320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.136381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.193307   65393 cri.go:89] found id: ""
	I0404 22:59:06.193336   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.193347   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.193355   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:06.193415   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:06.233527   65393 cri.go:89] found id: ""
	I0404 22:59:06.233558   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.233566   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:06.233574   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.233585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:06.320567   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.320602   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.363687   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:06.363718   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:06.423209   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:06.423246   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:06.437978   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:06.438009   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:02.862485   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.360827   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.805057   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:07.805584   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:06.405660   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.405683   64902 cri.go:89] found id: ""
	I0404 22:59:06.405693   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:06.405758   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.410717   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.410794   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.462354   64902 cri.go:89] found id: ""
	I0404 22:59:06.462386   64902 logs.go:276] 0 containers: []
	W0404 22:59:06.462398   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.462404   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:06.462452   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:06.511014   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.511038   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.511044   64902 cri.go:89] found id: ""
	I0404 22:59:06.511052   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:06.511110   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.517858   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.522766   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:06.522794   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.576654   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:06.576689   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.623256   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:06.623286   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.678337   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:06.678369   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.722261   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:06.722291   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.762151   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.762184   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.814956   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.814983   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:07.273914   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:07.273975   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:07.328704   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:07.328745   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:07.344734   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:07.344765   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:07.473031   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:07.473067   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:07.523879   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:07.523922   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:07.569734   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:07.569793   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.109972   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:59:10.115439   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:59:10.117026   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:59:10.117051   64902 api_server.go:131] duration metric: took 4.019378057s to wait for apiserver health ...
	I0404 22:59:10.117059   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:10.117084   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:10.117138   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:10.161095   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.161113   64902 cri.go:89] found id: ""
	I0404 22:59:10.161120   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:10.161167   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.165630   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:10.165694   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:10.204636   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.204655   64902 cri.go:89] found id: ""
	I0404 22:59:10.204662   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:10.204711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.209645   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:10.209721   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:10.256830   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:10.256856   64902 cri.go:89] found id: ""
	I0404 22:59:10.256866   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:10.256917   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.261699   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:10.261763   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:10.304897   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.304914   64902 cri.go:89] found id: ""
	I0404 22:59:10.304922   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:10.304976   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.310884   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:10.310961   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:10.349724   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.349745   64902 cri.go:89] found id: ""
	I0404 22:59:10.349754   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:10.349811   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.354588   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:10.354643   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:10.399066   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.399087   64902 cri.go:89] found id: ""
	I0404 22:59:10.399113   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:10.399160   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.404698   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:10.404771   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:10.454142   64902 cri.go:89] found id: ""
	I0404 22:59:10.454173   64902 logs.go:276] 0 containers: []
	W0404 22:59:10.454183   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:10.454189   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:10.454347   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:10.503594   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.503620   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.503624   64902 cri.go:89] found id: ""
	I0404 22:59:10.503633   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:10.503696   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.510223   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.515081   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:10.515102   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.571927   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:10.571965   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.625355   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:10.625391   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.669033   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:10.669061   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.729035   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:10.729070   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.778108   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:10.778138   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.828328   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:10.828355   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:10.885732   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:10.885784   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:10.905718   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:10.905759   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:11.284079   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:11.284115   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:11.328982   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:11.329013   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:11.372384   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:11.372415   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:06.524620   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:09.025228   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:09.040219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:09.040306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:09.077397   65393 cri.go:89] found id: ""
	I0404 22:59:09.077428   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.077439   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:09.077447   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:09.077530   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:09.115279   65393 cri.go:89] found id: ""
	I0404 22:59:09.115309   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.115319   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:09.115326   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:09.115391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:09.156338   65393 cri.go:89] found id: ""
	I0404 22:59:09.156367   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.156375   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:09.156381   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:09.156444   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:09.199281   65393 cri.go:89] found id: ""
	I0404 22:59:09.199310   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.199319   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:09.199325   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:09.199377   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:09.239845   65393 cri.go:89] found id: ""
	I0404 22:59:09.239870   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.239878   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:09.239883   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:09.239944   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:09.285520   65393 cri.go:89] found id: ""
	I0404 22:59:09.285551   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.285562   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:09.285569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:09.285635   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:09.322005   65393 cri.go:89] found id: ""
	I0404 22:59:09.322033   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.322043   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:09.322050   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:09.322113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:09.357356   65393 cri.go:89] found id: ""
	I0404 22:59:09.357384   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.357394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:09.357404   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:09.357419   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:09.437353   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:09.437389   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:09.480066   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:09.480095   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:09.534394   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:09.534433   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:09.548926   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:09.548951   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:09.623970   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:07.363250   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.860037   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.806287   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:12.306720   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:11.521189   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:11.521222   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:14.076828   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:14.076859   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.076866   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.076872   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.076878   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.076882   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.076888   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.076895   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.076901   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.076911   64902 system_pods.go:74] duration metric: took 3.959845225s to wait for pod list to return data ...
	I0404 22:59:14.076920   64902 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:14.080018   64902 default_sa.go:45] found service account: "default"
	I0404 22:59:14.080052   64902 default_sa.go:55] duration metric: took 3.124198ms for default service account to be created ...
	I0404 22:59:14.080063   64902 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:14.085812   64902 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:14.085841   64902 system_pods.go:89] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.085847   64902 system_pods.go:89] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.085851   64902 system_pods.go:89] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.085855   64902 system_pods.go:89] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.085859   64902 system_pods.go:89] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.085863   64902 system_pods.go:89] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.085871   64902 system_pods.go:89] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.085875   64902 system_pods.go:89] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.085882   64902 system_pods.go:126] duration metric: took 5.81489ms to wait for k8s-apps to be running ...
	I0404 22:59:14.085889   64902 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:14.085933   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:14.106253   64902 system_svc.go:56] duration metric: took 20.352553ms WaitForService to wait for kubelet
	I0404 22:59:14.106295   64902 kubeadm.go:576] duration metric: took 4m23.188703249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:14.106319   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:14.110333   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:14.110359   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:14.110373   64902 node_conditions.go:105] duration metric: took 4.048469ms to run NodePressure ...
	I0404 22:59:14.110389   64902 start.go:240] waiting for startup goroutines ...
	I0404 22:59:14.110399   64902 start.go:245] waiting for cluster config update ...
	I0404 22:59:14.110412   64902 start.go:254] writing updated cluster config ...
	I0404 22:59:14.110736   64902 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:14.160959   64902 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 22:59:14.164129   64902 out.go:177] * Done! kubectl is now configured to use "embed-certs-143118" cluster and "default" namespace by default
	I0404 22:59:12.124508   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:12.139786   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:12.139862   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:12.179841   65393 cri.go:89] found id: ""
	I0404 22:59:12.179864   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.179872   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:12.179877   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:12.179934   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:12.217232   65393 cri.go:89] found id: ""
	I0404 22:59:12.217260   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.217270   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:12.217277   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:12.217333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:12.258875   65393 cri.go:89] found id: ""
	I0404 22:59:12.258905   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.258917   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:12.258927   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:12.258990   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:12.302455   65393 cri.go:89] found id: ""
	I0404 22:59:12.302493   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.302508   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:12.302516   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:12.302581   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:12.345264   65393 cri.go:89] found id: ""
	I0404 22:59:12.345298   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.345310   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:12.345322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:12.345386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:12.383782   65393 cri.go:89] found id: ""
	I0404 22:59:12.383805   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.383814   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:12.383820   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:12.383881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:12.421738   65393 cri.go:89] found id: ""
	I0404 22:59:12.421767   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.421777   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:12.421784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:12.421844   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:12.460348   65393 cri.go:89] found id: ""
	I0404 22:59:12.460379   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.460391   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:12.460407   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:12.460424   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:12.516043   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:12.516081   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:12.531557   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:12.531585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:12.603052   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:12.603080   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:12.603098   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:12.689033   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:12.689069   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.253621   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:15.268084   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:15.268162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:15.306888   65393 cri.go:89] found id: ""
	I0404 22:59:15.306913   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.306922   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:15.306929   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:15.306986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:15.345169   65393 cri.go:89] found id: ""
	I0404 22:59:15.345203   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.345214   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:15.345221   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:15.345279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:15.381835   65393 cri.go:89] found id: ""
	I0404 22:59:15.381863   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.381874   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:15.381881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:15.381941   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:15.418221   65393 cri.go:89] found id: ""
	I0404 22:59:15.418247   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.418254   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:15.418259   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:15.418302   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:15.456658   65393 cri.go:89] found id: ""
	I0404 22:59:15.456684   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.456696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:15.456703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:15.456761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:15.498325   65393 cri.go:89] found id: ""
	I0404 22:59:15.498349   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.498359   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:15.498367   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:15.498443   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:15.538694   65393 cri.go:89] found id: ""
	I0404 22:59:15.538723   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.538731   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:15.538738   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:15.538796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:15.575615   65393 cri.go:89] found id: ""
	I0404 22:59:15.575642   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.575650   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:15.575660   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:15.575672   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.616824   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:15.616851   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:15.670897   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:15.670945   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:15.688394   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:15.688429   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:15.764184   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:15.764207   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:15.764222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:11.860993   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:13.861520   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:16.361055   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:14.809060   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:17.308968   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:18.346181   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:18.361390   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:18.361465   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:18.410432   65393 cri.go:89] found id: ""
	I0404 22:59:18.410463   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.410474   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:18.410482   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:18.410547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:18.449280   65393 cri.go:89] found id: ""
	I0404 22:59:18.449309   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.449317   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:18.449322   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:18.449380   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:18.508387   65393 cri.go:89] found id: ""
	I0404 22:59:18.508411   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.508420   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:18.508425   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:18.508481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:18.555469   65393 cri.go:89] found id: ""
	I0404 22:59:18.555492   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.555501   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:18.555506   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:18.555551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:18.592206   65393 cri.go:89] found id: ""
	I0404 22:59:18.592231   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.592239   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:18.592245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:18.592294   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:18.628850   65393 cri.go:89] found id: ""
	I0404 22:59:18.628890   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.628900   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:18.628908   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:18.628968   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:18.667507   65393 cri.go:89] found id: ""
	I0404 22:59:18.667543   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.667556   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:18.667564   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:18.667630   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:18.706367   65393 cri.go:89] found id: ""
	I0404 22:59:18.706392   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.706410   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:18.706422   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:18.706438   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:18.761069   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:18.761108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:18.777164   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:18.777204   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:18.861741   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:18.861769   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:18.861782   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:18.948064   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:18.948108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:18.361239   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:20.362324   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:19.805576   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.805654   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.497977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:21.511810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:21.511901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:21.548151   65393 cri.go:89] found id: ""
	I0404 22:59:21.548177   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.548188   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:21.548196   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:21.548241   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:21.587395   65393 cri.go:89] found id: ""
	I0404 22:59:21.587420   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.587436   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:21.587441   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:21.587507   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:21.624280   65393 cri.go:89] found id: ""
	I0404 22:59:21.624312   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.624322   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:21.624330   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:21.624391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:21.664557   65393 cri.go:89] found id: ""
	I0404 22:59:21.664583   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.664593   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:21.664600   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:21.664666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:21.705570   65393 cri.go:89] found id: ""
	I0404 22:59:21.705601   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.705614   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:21.705622   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:21.705683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:21.744722   65393 cri.go:89] found id: ""
	I0404 22:59:21.744755   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.744764   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:21.744770   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:21.744831   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:21.784997   65393 cri.go:89] found id: ""
	I0404 22:59:21.785036   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.785047   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:21.785054   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:21.785117   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:21.825404   65393 cri.go:89] found id: ""
	I0404 22:59:21.825428   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.825435   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:21.825443   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:21.825467   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:21.880421   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:21.880470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:21.898337   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:21.898367   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:21.987201   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:21.987233   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:21.987249   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.070135   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:22.070176   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:24.613694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:24.627661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:24.627823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:24.667552   65393 cri.go:89] found id: ""
	I0404 22:59:24.667580   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.667594   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:24.667601   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:24.667663   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:24.705866   65393 cri.go:89] found id: ""
	I0404 22:59:24.705888   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.705897   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:24.705905   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:24.705975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:24.746920   65393 cri.go:89] found id: ""
	I0404 22:59:24.746948   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.746959   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:24.746967   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:24.747021   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:24.792236   65393 cri.go:89] found id: ""
	I0404 22:59:24.792259   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.792270   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:24.792281   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:24.792340   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:24.832066   65393 cri.go:89] found id: ""
	I0404 22:59:24.832096   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.832107   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:24.832133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:24.832207   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:24.873568   65393 cri.go:89] found id: ""
	I0404 22:59:24.873594   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.873605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:24.873612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:24.873678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:24.922719   65393 cri.go:89] found id: ""
	I0404 22:59:24.922743   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.922750   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:24.922756   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:24.922801   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:24.981180   65393 cri.go:89] found id: ""
	I0404 22:59:24.981229   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.981243   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:24.981255   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:24.981272   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:25.039695   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:25.039735   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:25.097992   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:25.098037   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:25.113941   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:25.113970   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:25.185615   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:25.185643   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:25.185659   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.362720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.363009   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.305260   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:26.805336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:27.772867   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:27.787478   65393 kubeadm.go:591] duration metric: took 4m3.182360219s to restartPrimaryControlPlane
	W0404 22:59:27.787560   65393 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:27.787594   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 22:59:30.083285   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.29566411s)
	I0404 22:59:30.083364   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:30.099547   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:59:30.110792   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:59:30.123094   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:59:30.123110   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:59:30.123152   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:59:30.133535   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:59:30.133596   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:59:30.144194   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:59:30.154411   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:59:30.154476   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:59:30.164648   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.174227   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:59:30.174292   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.184396   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:59:30.194311   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:59:30.194370   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:59:30.204463   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 22:59:30.284881   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 22:59:30.285065   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 22:59:30.439256   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 22:59:30.439379   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 22:59:30.439558   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 22:59:30.640320   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 22:59:30.642667   65393 out.go:204]   - Generating certificates and keys ...
	I0404 22:59:30.642787   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 22:59:30.642883   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 22:59:30.643858   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 22:59:30.643964   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 22:59:30.644068   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 22:59:30.644183   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 22:59:30.644914   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 22:59:30.646151   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 22:59:30.646413   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 22:59:30.646975   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 22:59:30.647036   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 22:59:30.647163   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 22:59:30.818578   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 22:59:31.002928   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 22:59:31.300200   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 22:59:26.861545   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:29.361333   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.508251   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 22:59:31.525515   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 22:59:31.527679   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 22:59:31.527773   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 22:59:31.680829   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 22:59:28.806960   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.305235   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:33.305735   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.682802   65393 out.go:204]   - Booting up control plane ...
	I0404 22:59:31.682939   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 22:59:31.684100   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 22:59:31.685552   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 22:59:31.686931   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 22:59:31.689241   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 22:59:31.863885   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:34.361582   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:36.362733   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:35.810851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:37.305033   65047 pod_ready.go:81] duration metric: took 4m0.006524977s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:37.305064   65047 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:37.305072   65047 pod_ready.go:38] duration metric: took 4m5.047389638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:37.305095   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:37.305121   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:37.305167   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:37.365002   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:37.365022   65047 cri.go:89] found id: ""
	I0404 22:59:37.365029   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:37.365079   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.370431   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:37.370490   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:37.411461   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:37.411489   65047 cri.go:89] found id: ""
	I0404 22:59:37.411498   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:37.411546   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.416214   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:37.416280   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:37.467470   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:37.467500   65047 cri.go:89] found id: ""
	I0404 22:59:37.467510   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:37.467565   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.472332   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:37.472394   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:37.511792   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:37.511815   65047 cri.go:89] found id: ""
	I0404 22:59:37.511821   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:37.511870   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.516458   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:37.516514   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:37.556843   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:37.556869   65047 cri.go:89] found id: ""
	I0404 22:59:37.556880   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:37.556941   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.561556   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:37.561617   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:37.601741   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.601764   65047 cri.go:89] found id: ""
	I0404 22:59:37.601775   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:37.601831   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.606376   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:37.606449   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:37.647116   65047 cri.go:89] found id: ""
	I0404 22:59:37.647139   65047 logs.go:276] 0 containers: []
	W0404 22:59:37.647146   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:37.647151   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:37.647211   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:37.694580   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.694603   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:37.694607   65047 cri.go:89] found id: ""
	I0404 22:59:37.694614   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:37.694662   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.699109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.703776   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:37.703802   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.758969   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:37.759001   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.808316   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:37.808339   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:37.873353   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:37.873388   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:37.891256   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:37.891290   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:38.047292   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:38.047323   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:38.104845   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:38.104881   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:38.180173   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:38.180209   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:38.225152   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:38.225185   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:38.275621   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:38.275647   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:38.861119   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:40.862057   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:38.791198   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:38.791239   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:38.838995   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:38.839032   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:38.889944   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:38.889980   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.437490   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:41.454599   65047 api_server.go:72] duration metric: took 4m14.969220816s to wait for apiserver process to appear ...
	I0404 22:59:41.454641   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:41.454676   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:41.454729   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:41.496719   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.496740   65047 cri.go:89] found id: ""
	I0404 22:59:41.496747   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:41.496790   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.501869   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:41.501949   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:41.548019   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:41.548038   65047 cri.go:89] found id: ""
	I0404 22:59:41.548047   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:41.548105   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.552683   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:41.552743   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:41.594018   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.594046   65047 cri.go:89] found id: ""
	I0404 22:59:41.594054   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:41.594109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.598612   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:41.598670   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:41.640618   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:41.640644   65047 cri.go:89] found id: ""
	I0404 22:59:41.640654   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:41.640711   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.645201   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:41.645271   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:41.691378   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.691401   65047 cri.go:89] found id: ""
	I0404 22:59:41.691410   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:41.691465   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.696359   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:41.696421   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:41.738169   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:41.738199   65047 cri.go:89] found id: ""
	I0404 22:59:41.738208   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:41.738261   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.742769   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:41.742844   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:41.782136   65047 cri.go:89] found id: ""
	I0404 22:59:41.782163   65047 logs.go:276] 0 containers: []
	W0404 22:59:41.782175   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:41.782181   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:41.782244   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:41.825698   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:41.825717   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:41.825721   65047 cri.go:89] found id: ""
	I0404 22:59:41.825728   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:41.825773   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.834332   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.840251   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:41.840280   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.914817   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:41.914856   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.956375   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:41.956401   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.999930   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:41.999960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:42.067118   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:42.067148   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:42.104788   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:42.104818   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:42.542407   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:42.542444   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:42.596923   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:42.596957   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:42.613545   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:42.613571   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:42.732728   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:42.732756   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:42.801975   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:42.802010   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:42.844728   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:42.844757   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:42.897576   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:42.897602   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:42.863167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.360824   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.448107   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:59:45.453565   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:59:45.454921   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:59:45.454945   65047 api_server.go:131] duration metric: took 4.000295856s to wait for apiserver health ...
	I0404 22:59:45.454955   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:45.454985   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:45.455041   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:45.499810   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:45.499841   65047 cri.go:89] found id: ""
	I0404 22:59:45.499849   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:45.499903   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.506696   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:45.506797   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:45.549063   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:45.549087   65047 cri.go:89] found id: ""
	I0404 22:59:45.549094   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:45.549135   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.553601   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:45.553663   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:45.605961   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:45.605989   65047 cri.go:89] found id: ""
	I0404 22:59:45.605999   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:45.606057   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.610424   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:45.610496   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:45.651488   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:45.651520   65047 cri.go:89] found id: ""
	I0404 22:59:45.651530   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:45.651589   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.655935   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:45.656005   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:45.694207   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:45.694225   65047 cri.go:89] found id: ""
	I0404 22:59:45.694235   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:45.694290   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.698466   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:45.698520   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:45.736294   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:45.736318   65047 cri.go:89] found id: ""
	I0404 22:59:45.736326   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:45.736389   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.741126   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:45.741200   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:45.783225   65047 cri.go:89] found id: ""
	I0404 22:59:45.783254   65047 logs.go:276] 0 containers: []
	W0404 22:59:45.783265   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:45.783271   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:45.783332   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:45.823086   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:45.823110   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:45.823116   65047 cri.go:89] found id: ""
	I0404 22:59:45.823126   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:45.823182   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.827497   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.832389   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:45.832416   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:45.892925   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:45.892967   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:46.018980   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:46.019011   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:46.083405   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:46.083438   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:46.144135   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:46.144169   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:46.190770   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:46.190803   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:46.257768   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:46.257801   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:46.275102   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:46.275127   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:46.312291   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:46.312318   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:46.351086   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:46.351112   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:46.393478   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:46.393506   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:46.438745   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:46.438778   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:46.822672   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:46.822706   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:49.386090   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:49.386122   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.386127   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.386133   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.386137   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.386140   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.386143   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.386150   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.386156   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.386165   65047 system_pods.go:74] duration metric: took 3.931202298s to wait for pod list to return data ...
	I0404 22:59:49.386176   65047 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:49.389035   65047 default_sa.go:45] found service account: "default"
	I0404 22:59:49.389067   65047 default_sa.go:55] duration metric: took 2.877732ms for default service account to be created ...
	I0404 22:59:49.389079   65047 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:49.394829   65047 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:49.394859   65047 system_pods.go:89] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.394867   65047 system_pods.go:89] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.394875   65047 system_pods.go:89] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.394882   65047 system_pods.go:89] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.394888   65047 system_pods.go:89] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.394896   65047 system_pods.go:89] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.394904   65047 system_pods.go:89] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.394912   65047 system_pods.go:89] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.394922   65047 system_pods.go:126] duration metric: took 5.837975ms to wait for k8s-apps to be running ...
	I0404 22:59:49.394931   65047 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:49.394980   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:49.414076   65047 system_svc.go:56] duration metric: took 19.132995ms WaitForService to wait for kubelet
	I0404 22:59:49.414111   65047 kubeadm.go:576] duration metric: took 4m22.928735837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:49.414133   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:49.417160   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:49.417189   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:49.417204   65047 node_conditions.go:105] duration metric: took 3.065597ms to run NodePressure ...
	I0404 22:59:49.417218   65047 start.go:240] waiting for startup goroutines ...
	I0404 22:59:49.417228   65047 start.go:245] waiting for cluster config update ...
	I0404 22:59:49.417240   65047 start.go:254] writing updated cluster config ...
	I0404 22:59:49.417615   65047 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:49.470214   65047 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0404 22:59:49.472584   65047 out.go:177] * Done! kubectl is now configured to use "no-preload-024416" cluster and "default" namespace by default
	I0404 22:59:47.361662   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:49.861684   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:52.360447   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:54.361574   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:56.361652   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:57.854723   64791 pod_ready.go:81] duration metric: took 4m0.000936307s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:57.854770   64791 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" (will not retry!)
	I0404 22:59:57.854788   64791 pod_ready.go:38] duration metric: took 4m7.483622498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:57.854822   64791 kubeadm.go:591] duration metric: took 4m16.210645162s to restartPrimaryControlPlane
	W0404 22:59:57.854889   64791 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:57.854920   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:00:11.689226   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:00:11.689589   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:11.689862   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:16.690194   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:16.690413   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:30.113988   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.259044217s)
	I0404 23:00:30.114079   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:30.130372   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 23:00:30.141114   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:00:30.151572   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:00:30.151606   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 23:00:30.151649   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 23:00:30.162006   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:00:30.162058   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:00:30.172386   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 23:00:30.182463   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:00:30.182526   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:00:30.192462   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.202932   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:00:30.203003   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.212623   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 23:00:30.222016   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:00:30.222079   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 23:00:30.231912   64791 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:00:30.292277   64791 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0404 23:00:30.292356   64791 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:00:30.453305   64791 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:00:30.453442   64791 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:00:30.453626   64791 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:00:30.680949   64791 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:00:26.690539   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:26.690756   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:30.683066   64791 out.go:204]   - Generating certificates and keys ...
	I0404 23:00:30.683154   64791 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:00:30.683236   64791 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:00:30.683345   64791 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:00:30.683428   64791 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:00:30.683546   64791 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:00:30.683635   64791 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:00:30.683733   64791 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:00:30.683815   64791 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:00:30.684097   64791 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:00:30.684545   64791 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:00:30.684950   64791 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:00:30.685055   64791 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:00:30.969937   64791 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:00:31.196768   64791 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0404 23:00:31.562187   64791 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:00:32.005580   64791 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:00:32.066695   64791 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:00:32.067434   64791 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:00:32.070078   64791 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:00:32.072059   64791 out.go:204]   - Booting up control plane ...
	I0404 23:00:32.072188   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:00:32.072299   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:00:32.072803   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:00:32.094384   64791 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:00:32.095424   64791 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:00:32.095565   64791 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:00:32.235639   64791 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:00:37.740974   64791 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.503079 seconds
	I0404 23:00:37.757121   64791 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0404 23:00:37.771037   64791 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0404 23:00:38.311403   64791 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0404 23:00:38.311608   64791 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-952083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0404 23:00:38.826726   64791 kubeadm.go:309] [bootstrap-token] Using token: fa5m5r.x1an64r8vrgp89m4
	I0404 23:00:38.828302   64791 out.go:204]   - Configuring RBAC rules ...
	I0404 23:00:38.828445   64791 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0404 23:00:38.835805   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0404 23:00:38.846727   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0404 23:00:38.850494   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0404 23:00:38.854653   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0404 23:00:38.859330   64791 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0404 23:00:38.882227   64791 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0404 23:00:39.129283   64791 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0404 23:00:39.263855   64791 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0404 23:00:39.265385   64791 kubeadm.go:309] 
	I0404 23:00:39.265534   64791 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0404 23:00:39.265556   64791 kubeadm.go:309] 
	I0404 23:00:39.265675   64791 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0404 23:00:39.265697   64791 kubeadm.go:309] 
	I0404 23:00:39.265728   64791 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0404 23:00:39.265816   64791 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0404 23:00:39.265873   64791 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0404 23:00:39.265883   64791 kubeadm.go:309] 
	I0404 23:00:39.265948   64791 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0404 23:00:39.265957   64791 kubeadm.go:309] 
	I0404 23:00:39.266009   64791 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0404 23:00:39.266018   64791 kubeadm.go:309] 
	I0404 23:00:39.266072   64791 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0404 23:00:39.266189   64791 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0404 23:00:39.266303   64791 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0404 23:00:39.266313   64791 kubeadm.go:309] 
	I0404 23:00:39.266445   64791 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0404 23:00:39.266542   64791 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0404 23:00:39.266555   64791 kubeadm.go:309] 
	I0404 23:00:39.266661   64791 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.266828   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c \
	I0404 23:00:39.266878   64791 kubeadm.go:309] 	--control-plane 
	I0404 23:00:39.266887   64791 kubeadm.go:309] 
	I0404 23:00:39.267081   64791 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0404 23:00:39.267099   64791 kubeadm.go:309] 
	I0404 23:00:39.267206   64791 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.267353   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c 
	I0404 23:00:39.278406   64791 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:00:39.278440   64791 cni.go:84] Creating CNI manager for ""
	I0404 23:00:39.278450   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 23:00:39.280130   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 23:00:39.281491   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 23:00:39.300708   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 23:00:39.411609   64791 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 23:00:39.411700   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:39.411741   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-952083 minikube.k8s.io/updated_at=2024_04_04T23_00_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=default-k8s-diff-port-952083 minikube.k8s.io/primary=true
	I0404 23:00:39.491213   64791 ops.go:34] apiserver oom_adj: -16
	I0404 23:00:39.639999   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.140239   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.640887   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.140139   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.641048   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.140111   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.640439   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.140701   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.640565   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.140884   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.640405   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.140665   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.640037   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.140251   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.640442   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.691433   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:46.691736   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:47.140207   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:47.640279   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.141046   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.640259   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.140525   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.640869   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.140946   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.640215   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.140706   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.640589   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.140926   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.640774   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.755392   64791 kubeadm.go:1107] duration metric: took 13.343764127s to wait for elevateKubeSystemPrivileges
	W0404 23:00:52.755430   64791 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0404 23:00:52.755438   64791 kubeadm.go:393] duration metric: took 5m11.165500768s to StartCluster
	I0404 23:00:52.755452   64791 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.755542   64791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 23:00:52.757921   64791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.758240   64791 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 23:00:52.760258   64791 out.go:177] * Verifying Kubernetes components...
	I0404 23:00:52.758360   64791 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 23:00:52.758468   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 23:00:52.761972   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 23:00:52.761988   64791 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.761992   64791 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762023   64791 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-952083"
	I0404 23:00:52.762033   64791 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762045   64791 addons.go:243] addon metrics-server should already be in state true
	I0404 23:00:52.761969   64791 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762082   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762089   64791 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762098   64791 addons.go:243] addon storage-provisioner should already be in state true
	I0404 23:00:52.762120   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762369   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762402   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762483   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762595   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.780081   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33543
	I0404 23:00:52.780630   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.782784   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0404 23:00:52.782816   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0404 23:00:52.783239   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783487   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783765   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783788   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.783934   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783952   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784138   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784378   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784386   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.784518   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.784536   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784883   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.785321   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785347   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.785391   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785423   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.788871   64791 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.788894   64791 addons.go:243] addon default-storageclass should already be in state true
	I0404 23:00:52.788931   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.789296   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.789333   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.804398   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33107
	I0404 23:00:52.804904   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.805654   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.805676   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.806123   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.806391   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.806825   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32861
	I0404 23:00:52.807228   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.807701   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.807729   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.808198   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.808222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.808414   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.810322   64791 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 23:00:52.809224   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0404 23:00:52.809886   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.811775   64791 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:52.811788   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 23:00:52.811806   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.813354   64791 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 23:00:52.812191   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.814489   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.814754   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 23:00:52.814765   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 23:00:52.814784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.815034   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.815192   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.815469   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.815641   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.815811   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.815997   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.816011   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.816227   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.816944   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.817981   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.818023   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.818260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818295   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.818316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818487   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.818752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.819016   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.819223   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.835713   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0404 23:00:52.836456   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.837140   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.837168   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.837987   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.838475   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.840280   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.840539   64791 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:52.840558   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 23:00:52.840576   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.843852   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.844244   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844418   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.844593   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.844757   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.844899   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.960152   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 23:00:52.982987   64791 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992420   64791 node_ready.go:49] node "default-k8s-diff-port-952083" has status "Ready":"True"
	I0404 23:00:52.992460   64791 node_ready.go:38] duration metric: took 9.431627ms for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992472   64791 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 23:00:52.998485   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:53.098746   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 23:00:53.098775   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 23:00:53.122387   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 23:00:53.122418   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 23:00:53.123566   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:53.143424   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:53.165026   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 23:00:53.165052   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 23:00:53.225963   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 23:00:53.720949   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720969   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720980   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.720988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721399   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721407   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721413   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721415   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721423   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721425   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721763   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721768   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721774   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721781   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.754309   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.754337   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.754642   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.754661   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.174695   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.174726   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175070   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175111   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175133   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175146   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.175154   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175443   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175446   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175454   64791 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-952083"
	I0404 23:00:54.177650   64791 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0404 23:00:54.179204   64791 addons.go:505] duration metric: took 1.420843749s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0404 23:00:55.012235   64791 pod_ready.go:102] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"False"
	I0404 23:00:56.006950   64791 pod_ready.go:92] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.006974   64791 pod_ready.go:81] duration metric: took 3.00846182s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.006983   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.012991   64791 pod_ready.go:92] pod "coredns-76f75df574-vnzlh" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.013015   64791 pod_ready.go:81] duration metric: took 6.025362ms for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.013028   64791 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.017976   64791 pod_ready.go:92] pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.017999   64791 pod_ready.go:81] duration metric: took 4.963826ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.018008   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022797   64791 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.022816   64791 pod_ready.go:81] duration metric: took 4.802173ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022825   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.027987   64791 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.028011   64791 pod_ready.go:81] duration metric: took 5.178244ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.028024   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402956   64791 pod_ready.go:92] pod "kube-proxy-lbw9b" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.402978   64791 pod_ready.go:81] duration metric: took 374.945741ms for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402998   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806688   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.806723   64791 pod_ready.go:81] duration metric: took 403.715948ms for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806734   64791 pod_ready.go:38] duration metric: took 3.814250651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 23:00:56.806748   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 23:00:56.806804   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 23:00:56.823558   64791 api_server.go:72] duration metric: took 4.065275824s to wait for apiserver process to appear ...
	I0404 23:00:56.823582   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 23:00:56.823602   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 23:00:56.828204   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 23:00:56.829616   64791 api_server.go:141] control plane version: v1.29.3
	I0404 23:00:56.829647   64791 api_server.go:131] duration metric: took 6.059596ms to wait for apiserver health ...
	I0404 23:00:56.829654   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 23:00:57.009301   64791 system_pods.go:59] 9 kube-system pods found
	I0404 23:00:57.009336   64791 system_pods.go:61] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.009343   64791 system_pods.go:61] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.009349   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.009354   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.009359   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.009364   64791 system_pods.go:61] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.009368   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.009377   64791 system_pods.go:61] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.009382   64791 system_pods.go:61] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.009394   64791 system_pods.go:74] duration metric: took 179.733932ms to wait for pod list to return data ...
	I0404 23:00:57.009404   64791 default_sa.go:34] waiting for default service account to be created ...
	I0404 23:00:57.203053   64791 default_sa.go:45] found service account: "default"
	I0404 23:00:57.203080   64791 default_sa.go:55] duration metric: took 193.668691ms for default service account to be created ...
	I0404 23:00:57.203092   64791 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 23:00:57.407952   64791 system_pods.go:86] 9 kube-system pods found
	I0404 23:00:57.407986   64791 system_pods.go:89] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.407992   64791 system_pods.go:89] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.407996   64791 system_pods.go:89] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.408001   64791 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.408005   64791 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.408009   64791 system_pods.go:89] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.408013   64791 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.408021   64791 system_pods.go:89] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.408025   64791 system_pods.go:89] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.408033   64791 system_pods.go:126] duration metric: took 204.93565ms to wait for k8s-apps to be running ...
	I0404 23:00:57.408044   64791 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 23:00:57.408086   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:57.426022   64791 system_svc.go:56] duration metric: took 17.970809ms WaitForService to wait for kubelet
	I0404 23:00:57.426056   64791 kubeadm.go:576] duration metric: took 4.66777886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 23:00:57.426088   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 23:00:57.603468   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 23:00:57.603495   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 23:00:57.603508   64791 node_conditions.go:105] duration metric: took 177.414876ms to run NodePressure ...
	I0404 23:00:57.603522   64791 start.go:240] waiting for startup goroutines ...
	I0404 23:00:57.603532   64791 start.go:245] waiting for cluster config update ...
	I0404 23:00:57.603544   64791 start.go:254] writing updated cluster config ...
	I0404 23:00:57.603820   64791 ssh_runner.go:195] Run: rm -f paused
	I0404 23:00:57.652962   64791 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 23:00:57.655346   64791 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-952083" cluster and "default" namespace by default
	I0404 23:01:26.692667   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:01:26.693040   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:01:26.693065   65393 kubeadm.go:309] 
	I0404 23:01:26.693121   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:01:26.693176   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:01:26.693188   65393 kubeadm.go:309] 
	I0404 23:01:26.693230   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:01:26.693296   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:01:26.693448   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:01:26.693460   65393 kubeadm.go:309] 
	I0404 23:01:26.693615   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:01:26.693668   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:01:26.693715   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:01:26.693724   65393 kubeadm.go:309] 
	I0404 23:01:26.693859   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:01:26.693972   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:01:26.693981   65393 kubeadm.go:309] 
	I0404 23:01:26.694101   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:01:26.694189   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:01:26.694308   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:01:26.694419   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:01:26.694439   65393 kubeadm.go:309] 
	I0404 23:01:26.695392   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:01:26.695534   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:01:26.695645   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0404 23:01:26.695786   65393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0404 23:01:26.695852   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:01:27.175340   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:01:27.191436   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:01:27.202985   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:01:27.203023   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 23:01:27.203095   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 23:01:27.214266   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:01:27.214326   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:01:27.225823   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 23:01:27.236219   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:01:27.236291   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:01:27.247062   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.257772   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:01:27.257838   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.270595   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 23:01:27.283809   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:01:27.283884   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 23:01:27.294917   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:01:27.370268   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 23:01:27.370431   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:01:27.531171   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:01:27.531309   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:01:27.531502   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:01:27.741128   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:01:27.743555   65393 out.go:204]   - Generating certificates and keys ...
	I0404 23:01:27.743674   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:01:27.743778   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:01:27.743900   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:01:27.744020   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:01:27.744144   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:01:27.744231   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:01:27.744396   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:01:27.744532   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:01:27.744762   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:01:27.745172   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:01:27.745405   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:01:27.745902   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:01:27.811633   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:01:27.874609   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:01:28.009290   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:01:28.171654   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:01:28.193647   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:01:28.194533   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:01:28.194615   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:01:28.345640   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:01:28.348028   65393 out.go:204]   - Booting up control plane ...
	I0404 23:01:28.348233   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:01:28.352245   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:01:28.354730   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:01:28.354860   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:01:28.366484   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:02:08.368069   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:02:08.368190   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:08.368485   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:13.368934   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:13.369212   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:23.369698   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:23.369947   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:43.370821   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:43.371097   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372612   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:03:23.372925   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372952   65393 kubeadm.go:309] 
	I0404 23:03:23.373009   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:03:23.373060   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:03:23.373083   65393 kubeadm.go:309] 
	I0404 23:03:23.373139   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:03:23.373204   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:03:23.373444   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:03:23.373460   65393 kubeadm.go:309] 
	I0404 23:03:23.373609   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:03:23.373664   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:03:23.373709   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:03:23.373720   65393 kubeadm.go:309] 
	I0404 23:03:23.373881   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:03:23.373993   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:03:23.374004   65393 kubeadm.go:309] 
	I0404 23:03:23.374160   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:03:23.374293   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:03:23.374425   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:03:23.374542   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:03:23.374565   65393 kubeadm.go:309] 
	I0404 23:03:23.375946   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:03:23.376063   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:03:23.376228   65393 kubeadm.go:393] duration metric: took 7m58.828175379s to StartCluster
	I0404 23:03:23.376287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 23:03:23.376229   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0404 23:03:23.376350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 23:03:23.427597   65393 cri.go:89] found id: ""
	I0404 23:03:23.427625   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.427637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 23:03:23.427644   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 23:03:23.427709   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 23:03:23.466127   65393 cri.go:89] found id: ""
	I0404 23:03:23.466158   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.466168   65393 logs.go:278] No container was found matching "etcd"
	I0404 23:03:23.466176   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 23:03:23.466240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 23:03:23.509244   65393 cri.go:89] found id: ""
	I0404 23:03:23.509287   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.509295   65393 logs.go:278] No container was found matching "coredns"
	I0404 23:03:23.509301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 23:03:23.509347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 23:03:23.553691   65393 cri.go:89] found id: ""
	I0404 23:03:23.553722   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.553732   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 23:03:23.553740   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 23:03:23.553807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 23:03:23.601941   65393 cri.go:89] found id: ""
	I0404 23:03:23.601965   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.601973   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 23:03:23.601979   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 23:03:23.602037   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 23:03:23.639767   65393 cri.go:89] found id: ""
	I0404 23:03:23.639802   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.639811   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 23:03:23.639817   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 23:03:23.639875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 23:03:23.680136   65393 cri.go:89] found id: ""
	I0404 23:03:23.680168   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.680179   65393 logs.go:278] No container was found matching "kindnet"
	I0404 23:03:23.680187   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 23:03:23.680246   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 23:03:23.719745   65393 cri.go:89] found id: ""
	I0404 23:03:23.719767   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.719774   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 23:03:23.719784   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 23:03:23.719797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 23:03:23.776065   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 23:03:23.776105   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 23:03:23.791442   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 23:03:23.791469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 23:03:23.884793   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 23:03:23.884820   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 23:03:23.884836   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 23:03:24.001882   65393 logs.go:123] Gathering logs for container status ...
	I0404 23:03:24.001926   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0404 23:03:24.056020   65393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0404 23:03:24.056075   65393 out.go:239] * 
	W0404 23:03:24.056157   65393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.056191   65393 out.go:239] * 
	W0404 23:03:24.057119   65393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 23:03:24.060566   65393 out.go:177] 
	W0404 23:03:24.061904   65393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.061981   65393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0404 23:03:24.062009   65393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0404 23:03:24.063812   65393 out.go:177] 
	
	
	==> CRI-O <==
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.443909361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272349443885078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d219c2a-429b-4d47-ad72-3fa38207c95c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.444678083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf820cf7-b456-4a89-b852-b03ffe45effb name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.444736976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf820cf7-b456-4a89-b852-b03ffe45effb name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.444766217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bf820cf7-b456-4a89-b852-b03ffe45effb name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.482161785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8c8e7fb-4449-4fa6-a300-54e786cd2e82 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.482262755Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8c8e7fb-4449-4fa6-a300-54e786cd2e82 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.483772558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f639a93-94ff-4b30-b081-4a133ae28065 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.484318279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272349484289390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f639a93-94ff-4b30-b081-4a133ae28065 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.484984740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a68f6dc7-67d6-4c89-9fdf-bf10fe33f131 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.485050536Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a68f6dc7-67d6-4c89-9fdf-bf10fe33f131 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.485112640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a68f6dc7-67d6-4c89-9fdf-bf10fe33f131 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.520899895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9af3afc1-b4e9-42e9-913c-9fe55f03f55d name=/runtime.v1.RuntimeService/Version
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.520974120Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9af3afc1-b4e9-42e9-913c-9fe55f03f55d name=/runtime.v1.RuntimeService/Version
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.522210304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a432aa0-dece-4c6b-aecf-eb5926bd2d3f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.522692360Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272349522664014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a432aa0-dece-4c6b-aecf-eb5926bd2d3f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.523444250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adb49dec-039f-441b-88da-ebf85c6a0b12 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.523550055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adb49dec-039f-441b-88da-ebf85c6a0b12 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.523593020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=adb49dec-039f-441b-88da-ebf85c6a0b12 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.556288580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afb332d2-2e11-4fa1-bb11-2ff571098ea9 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.556377679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afb332d2-2e11-4fa1-bb11-2ff571098ea9 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.557735436Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fcadd97e-8a99-4079-9076-5dfdd0402102 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.558105978Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272349558081154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fcadd97e-8a99-4079-9076-5dfdd0402102 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.558728922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81951ce3-0af5-4eb8-b9de-842879e0e62c name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.558789411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81951ce3-0af5-4eb8-b9de-842879e0e62c name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:12:29 old-k8s-version-343162 crio[651]: time="2024-04-04 23:12:29.558824105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=81951ce3-0af5-4eb8-b9de-842879e0e62c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 4 22:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056193] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041693] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr 4 22:55] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.993320] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.666052] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.724891] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.065551] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096758] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.200608] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.163985] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.312462] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.410618] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.075387] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.725033] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[ +11.687086] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 4 22:59] systemd-fstab-generator[4953]: Ignoring "noauto" option for root device
	[Apr 4 23:01] systemd-fstab-generator[5232]: Ignoring "noauto" option for root device
	[  +0.067974] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:12:29 up 17 min,  0 users,  load average: 0.38, 0.13, 0.04
	Linux old-k8s-version-343162 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]: net.(*sysDialer).dialSerial(0xc00015ff80, 0x4f7fe40, 0xc000d5aae0, 0xc000d54930, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]:         /usr/local/go/src/net/dial.go:548 +0x152
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]: net.(*Dialer).DialContext(0xc0001e8240, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0005156e0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000afaa20, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0005156e0, 0x24, 0x60, 0x7fc20180c8f8, 0x118, ...)
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]: net/http.(*Transport).dial(0xc000a50000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0005156e0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]: net/http.(*Transport).dialConn(0xc000a50000, 0x4f7fe00, 0xc000052030, 0x0, 0xc00039a600, 0x5, 0xc0005156e0, 0x24, 0x0, 0xc00067d200, ...)
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]: net/http.(*Transport).dialConnFor(0xc000a50000, 0xc00066a630)
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]: created by net/http.(*Transport).queueForDial
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6406]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 04 23:12:24 old-k8s-version-343162 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 04 23:12:24 old-k8s-version-343162 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 04 23:12:24 old-k8s-version-343162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 04 23:12:24 old-k8s-version-343162 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 04 23:12:24 old-k8s-version-343162 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6416]: I0404 23:12:24.970917    6416 server.go:416] Version: v1.20.0
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6416]: I0404 23:12:24.971405    6416 server.go:837] Client rotation is on, will bootstrap in background
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6416]: I0404 23:12:24.974801    6416 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6416]: W0404 23:12:24.976368    6416 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 04 23:12:24 old-k8s-version-343162 kubelet[6416]: I0404 23:12:24.977291    6416 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-343162 -n old-k8s-version-343162
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 2 (247.999754ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-343162" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.47s)
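
For anyone reproducing this failure outside CI, the captured log's own suggestion (passing --extra-config=kubelet.cgroup-driver=systemd to minikube start) can be tried against the same profile. A minimal sketch, assuming the profile name from the log above and flags inferred from the report's kvm2/crio configuration and the v1.20.0 version shown in the kubeadm output (not the job's exact command line):

	# hedged sketch: re-start the failing profile with the kubelet cgroup-driver override the log suggests
	out/minikube-linux-amd64 start -p old-k8s-version-343162 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# then confirm the kubelet actually stays up before re-running the test
	out/minikube-linux-amd64 -p old-k8s-version-343162 ssh "sudo systemctl status kubelet"
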

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (411.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-143118 -n embed-certs-143118
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-04 23:15:08.608594506 +0000 UTC m=+6364.471580220
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-143118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-143118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.323µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-143118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
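
For manual triage, the expected-image check this assertion performs can be approximated directly with kubectl once the apiserver is reachable again; a small sketch using the context, namespace, and deployment names already shown above (the jsonpath expression is illustrative, not the test's own code):

	# inspect which image the dashboard-metrics-scraper deployment actually references
	kubectl --context embed-certs-143118 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the test expects this value to contain registry.k8s.io/echoserver:1.4
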
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-143118 -n embed-certs-143118
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-143118 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-143118 logs -n 25: (1.406030896s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo find                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo crio                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-063570                                       | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-443615 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | disable-driver-mounts-443615                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:46 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-952083  | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-143118            | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-024416             | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-343162        | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-952083       | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 23:00 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-143118                 | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-024416                  | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-343162             | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:14 UTC | 04 Apr 24 23:14 UTC |
	| start   | -p newest-cni-037368 --memory=2200 --alsologtostderr   | newest-cni-037368            | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:14 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| delete  | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:14 UTC | 04 Apr 24 23:14 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 23:14:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 23:14:30.870024   70163 out.go:291] Setting OutFile to fd 1 ...
	I0404 23:14:30.870269   70163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 23:14:30.870292   70163 out.go:304] Setting ErrFile to fd 2...
	I0404 23:14:30.870306   70163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 23:14:30.870801   70163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 23:14:30.871498   70163 out.go:298] Setting JSON to false
	I0404 23:14:30.872458   70163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7016,"bootTime":1712265455,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 23:14:30.872535   70163 start.go:139] virtualization: kvm guest
	I0404 23:14:30.875964   70163 out.go:177] * [newest-cni-037368] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 23:14:30.877646   70163 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 23:14:30.877641   70163 notify.go:220] Checking for updates...
	I0404 23:14:30.879382   70163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 23:14:30.881062   70163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 23:14:30.883864   70163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 23:14:30.885655   70163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 23:14:30.887559   70163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 23:14:30.889889   70163 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 23:14:30.889991   70163 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 23:14:30.890113   70163 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 23:14:30.890250   70163 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 23:14:30.929805   70163 out.go:177] * Using the kvm2 driver based on user configuration
	I0404 23:14:30.931316   70163 start.go:297] selected driver: kvm2
	I0404 23:14:30.931336   70163 start.go:901] validating driver "kvm2" against <nil>
	I0404 23:14:30.931349   70163 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 23:14:30.932020   70163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 23:14:30.932096   70163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 23:14:30.949067   70163 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 23:14:30.949143   70163 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0404 23:14:30.949181   70163 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0404 23:14:30.949496   70163 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0404 23:14:30.949581   70163 cni.go:84] Creating CNI manager for ""
	I0404 23:14:30.949600   70163 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 23:14:30.949617   70163 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0404 23:14:30.949700   70163 start.go:340] cluster config:
	{Name:newest-cni-037368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-037368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 23:14:30.949819   70163 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 23:14:30.952761   70163 out.go:177] * Starting "newest-cni-037368" primary control-plane node in "newest-cni-037368" cluster
	I0404 23:14:30.954443   70163 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 23:14:30.954511   70163 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0404 23:14:30.954523   70163 cache.go:56] Caching tarball of preloaded images
	I0404 23:14:30.954615   70163 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 23:14:30.954631   70163 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on crio
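The preload lines above show minikube checking its local cache for the preloaded image tarball and skipping the download because the file is already present. A minimal sketch of that kind of check, reusing the cache path and tarball name from the log as stand-in values (an illustration, not minikube's preload package):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Cache layout as it appears in the log above (stand-in values for illustration).
	cacheDir := "/home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball"
	tarball := "preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4"

	path := filepath.Join(cacheDir, tarball)
	if _, err := os.Stat(path); err == nil {
		fmt.Println("found local preload, skipping download:", path)
		return
	}
	fmt.Println("preload not cached, would download:", tarball)
}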
	I0404 23:14:30.954760   70163 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/config.json ...
	I0404 23:14:30.954791   70163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/config.json: {Name:mkdf5e70da216e38ff3343882e17305528e61904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:14:30.954952   70163 start.go:360] acquireMachinesLock for newest-cni-037368: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 23:14:30.954987   70163 start.go:364] duration metric: took 19.143µs to acquireMachinesLock for "newest-cni-037368"
	I0404 23:14:30.955011   70163 start.go:93] Provisioning new machine with config: &{Name:newest-cni-037368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-037368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 23:14:30.955096   70163 start.go:125] createHost starting for "" (driver="kvm2")
	I0404 23:14:30.956983   70163 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0404 23:14:30.957155   70163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:14:30.957292   70163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:14:30.973171   70163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38497
	I0404 23:14:30.973710   70163 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:14:30.974232   70163 main.go:141] libmachine: Using API Version  1
	I0404 23:14:30.974251   70163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:14:30.974654   70163 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:14:30.974912   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetMachineName
	I0404 23:14:30.975088   70163 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:14:30.975254   70163 start.go:159] libmachine.API.Create for "newest-cni-037368" (driver="kvm2")
	I0404 23:14:30.975284   70163 client.go:168] LocalClient.Create starting
	I0404 23:14:30.975318   70163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem
	I0404 23:14:30.975355   70163 main.go:141] libmachine: Decoding PEM data...
	I0404 23:14:30.975370   70163 main.go:141] libmachine: Parsing certificate...
	I0404 23:14:30.975419   70163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem
	I0404 23:14:30.975446   70163 main.go:141] libmachine: Decoding PEM data...
	I0404 23:14:30.975460   70163 main.go:141] libmachine: Parsing certificate...
	I0404 23:14:30.975475   70163 main.go:141] libmachine: Running pre-create checks...
	I0404 23:14:30.975490   70163 main.go:141] libmachine: (newest-cni-037368) Calling .PreCreateCheck
	I0404 23:14:30.975793   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetConfigRaw
	I0404 23:14:30.976276   70163 main.go:141] libmachine: Creating machine...
	I0404 23:14:30.976292   70163 main.go:141] libmachine: (newest-cni-037368) Calling .Create
	I0404 23:14:30.976409   70163 main.go:141] libmachine: (newest-cni-037368) Creating KVM machine...
	I0404 23:14:30.977746   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found existing default KVM network
	I0404 23:14:30.979255   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:30.979102   70186 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012dfa0}
	I0404 23:14:30.979302   70163 main.go:141] libmachine: (newest-cni-037368) DBG | created network xml: 
	I0404 23:14:30.979317   70163 main.go:141] libmachine: (newest-cni-037368) DBG | <network>
	I0404 23:14:30.979325   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   <name>mk-newest-cni-037368</name>
	I0404 23:14:30.979348   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   <dns enable='no'/>
	I0404 23:14:30.979357   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   
	I0404 23:14:30.979371   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0404 23:14:30.979383   70163 main.go:141] libmachine: (newest-cni-037368) DBG |     <dhcp>
	I0404 23:14:30.979406   70163 main.go:141] libmachine: (newest-cni-037368) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0404 23:14:30.979416   70163 main.go:141] libmachine: (newest-cni-037368) DBG |     </dhcp>
	I0404 23:14:30.979430   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   </ip>
	I0404 23:14:30.979444   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   
	I0404 23:14:30.979458   70163 main.go:141] libmachine: (newest-cni-037368) DBG | </network>
	I0404 23:14:30.979468   70163 main.go:141] libmachine: (newest-cni-037368) DBG | 
	I0404 23:14:30.985185   70163 main.go:141] libmachine: (newest-cni-037368) DBG | trying to create private KVM network mk-newest-cni-037368 192.168.39.0/24...
	I0404 23:14:31.060874   70163 main.go:141] libmachine: (newest-cni-037368) DBG | private KVM network mk-newest-cni-037368 192.168.39.0/24 created
	I0404 23:14:31.060910   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:31.060790   70186 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 23:14:31.060922   70163 main.go:141] libmachine: (newest-cni-037368) Setting up store path in /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368 ...
	I0404 23:14:31.060971   70163 main.go:141] libmachine: (newest-cni-037368) Building disk image from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 23:14:31.061011   70163 main.go:141] libmachine: (newest-cni-037368) Downloading /home/jenkins/minikube-integration/16143-5297/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0404 23:14:31.285537   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:31.285381   70186 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/id_rsa...
	I0404 23:14:31.524567   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:31.524404   70186 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/newest-cni-037368.rawdisk...
	I0404 23:14:31.524597   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Writing magic tar header
	I0404 23:14:31.524615   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Writing SSH key tar header
	I0404 23:14:31.524625   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:31.524541   70186 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368 ...
	I0404 23:14:31.524685   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368
	I0404 23:14:31.524714   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines
	I0404 23:14:31.524730   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 23:14:31.524747   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368 (perms=drwx------)
	I0404 23:14:31.524761   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297
	I0404 23:14:31.524775   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0404 23:14:31.524788   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins
	I0404 23:14:31.524803   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home
	I0404 23:14:31.524814   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines (perms=drwxr-xr-x)
	I0404 23:14:31.524823   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Skipping /home - not owner
	I0404 23:14:31.524838   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube (perms=drwxr-xr-x)
	I0404 23:14:31.524857   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297 (perms=drwxrwxr-x)
	I0404 23:14:31.524878   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0404 23:14:31.524893   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0404 23:14:31.524909   70163 main.go:141] libmachine: (newest-cni-037368) Creating domain...
	I0404 23:14:31.526135   70163 main.go:141] libmachine: (newest-cni-037368) define libvirt domain using xml: 
	I0404 23:14:31.526169   70163 main.go:141] libmachine: (newest-cni-037368) <domain type='kvm'>
	I0404 23:14:31.526176   70163 main.go:141] libmachine: (newest-cni-037368)   <name>newest-cni-037368</name>
	I0404 23:14:31.526181   70163 main.go:141] libmachine: (newest-cni-037368)   <memory unit='MiB'>2200</memory>
	I0404 23:14:31.526187   70163 main.go:141] libmachine: (newest-cni-037368)   <vcpu>2</vcpu>
	I0404 23:14:31.526199   70163 main.go:141] libmachine: (newest-cni-037368)   <features>
	I0404 23:14:31.526207   70163 main.go:141] libmachine: (newest-cni-037368)     <acpi/>
	I0404 23:14:31.526215   70163 main.go:141] libmachine: (newest-cni-037368)     <apic/>
	I0404 23:14:31.526249   70163 main.go:141] libmachine: (newest-cni-037368)     <pae/>
	I0404 23:14:31.526275   70163 main.go:141] libmachine: (newest-cni-037368)     
	I0404 23:14:31.526294   70163 main.go:141] libmachine: (newest-cni-037368)   </features>
	I0404 23:14:31.526310   70163 main.go:141] libmachine: (newest-cni-037368)   <cpu mode='host-passthrough'>
	I0404 23:14:31.526335   70163 main.go:141] libmachine: (newest-cni-037368)   
	I0404 23:14:31.526347   70163 main.go:141] libmachine: (newest-cni-037368)   </cpu>
	I0404 23:14:31.526356   70163 main.go:141] libmachine: (newest-cni-037368)   <os>
	I0404 23:14:31.526363   70163 main.go:141] libmachine: (newest-cni-037368)     <type>hvm</type>
	I0404 23:14:31.526386   70163 main.go:141] libmachine: (newest-cni-037368)     <boot dev='cdrom'/>
	I0404 23:14:31.526401   70163 main.go:141] libmachine: (newest-cni-037368)     <boot dev='hd'/>
	I0404 23:14:31.526414   70163 main.go:141] libmachine: (newest-cni-037368)     <bootmenu enable='no'/>
	I0404 23:14:31.526421   70163 main.go:141] libmachine: (newest-cni-037368)   </os>
	I0404 23:14:31.526433   70163 main.go:141] libmachine: (newest-cni-037368)   <devices>
	I0404 23:14:31.526443   70163 main.go:141] libmachine: (newest-cni-037368)     <disk type='file' device='cdrom'>
	I0404 23:14:31.526459   70163 main.go:141] libmachine: (newest-cni-037368)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/boot2docker.iso'/>
	I0404 23:14:31.526468   70163 main.go:141] libmachine: (newest-cni-037368)       <target dev='hdc' bus='scsi'/>
	I0404 23:14:31.526489   70163 main.go:141] libmachine: (newest-cni-037368)       <readonly/>
	I0404 23:14:31.526510   70163 main.go:141] libmachine: (newest-cni-037368)     </disk>
	I0404 23:14:31.526520   70163 main.go:141] libmachine: (newest-cni-037368)     <disk type='file' device='disk'>
	I0404 23:14:31.526537   70163 main.go:141] libmachine: (newest-cni-037368)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0404 23:14:31.526556   70163 main.go:141] libmachine: (newest-cni-037368)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/newest-cni-037368.rawdisk'/>
	I0404 23:14:31.526568   70163 main.go:141] libmachine: (newest-cni-037368)       <target dev='hda' bus='virtio'/>
	I0404 23:14:31.526578   70163 main.go:141] libmachine: (newest-cni-037368)     </disk>
	I0404 23:14:31.526591   70163 main.go:141] libmachine: (newest-cni-037368)     <interface type='network'>
	I0404 23:14:31.526605   70163 main.go:141] libmachine: (newest-cni-037368)       <source network='mk-newest-cni-037368'/>
	I0404 23:14:31.526621   70163 main.go:141] libmachine: (newest-cni-037368)       <model type='virtio'/>
	I0404 23:14:31.526634   70163 main.go:141] libmachine: (newest-cni-037368)     </interface>
	I0404 23:14:31.526655   70163 main.go:141] libmachine: (newest-cni-037368)     <interface type='network'>
	I0404 23:14:31.526669   70163 main.go:141] libmachine: (newest-cni-037368)       <source network='default'/>
	I0404 23:14:31.526690   70163 main.go:141] libmachine: (newest-cni-037368)       <model type='virtio'/>
	I0404 23:14:31.526706   70163 main.go:141] libmachine: (newest-cni-037368)     </interface>
	I0404 23:14:31.526720   70163 main.go:141] libmachine: (newest-cni-037368)     <serial type='pty'>
	I0404 23:14:31.526729   70163 main.go:141] libmachine: (newest-cni-037368)       <target port='0'/>
	I0404 23:14:31.526735   70163 main.go:141] libmachine: (newest-cni-037368)     </serial>
	I0404 23:14:31.526742   70163 main.go:141] libmachine: (newest-cni-037368)     <console type='pty'>
	I0404 23:14:31.526747   70163 main.go:141] libmachine: (newest-cni-037368)       <target type='serial' port='0'/>
	I0404 23:14:31.526751   70163 main.go:141] libmachine: (newest-cni-037368)     </console>
	I0404 23:14:31.526757   70163 main.go:141] libmachine: (newest-cni-037368)     <rng model='virtio'>
	I0404 23:14:31.526763   70163 main.go:141] libmachine: (newest-cni-037368)       <backend model='random'>/dev/random</backend>
	I0404 23:14:31.526771   70163 main.go:141] libmachine: (newest-cni-037368)     </rng>
	I0404 23:14:31.526777   70163 main.go:141] libmachine: (newest-cni-037368)     
	I0404 23:14:31.526793   70163 main.go:141] libmachine: (newest-cni-037368)     
	I0404 23:14:31.526806   70163 main.go:141] libmachine: (newest-cni-037368)   </devices>
	I0404 23:14:31.526815   70163 main.go:141] libmachine: (newest-cni-037368) </domain>
	I0404 23:14:31.526821   70163 main.go:141] libmachine: (newest-cni-037368) 
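The XML printed above is the libvirt domain definition minikube generates for the VM: a boot2docker ISO as CD-ROM, the raw disk image, one interface on the private mk-newest-cni-037368 network, one on the default network, plus a serial console and a virtio RNG. As a rough illustration only, the same objects could be created from the CLI, assuming the network and domain XML were saved to net.xml and domain.xml and that virsh can reach qemu:///system; minikube's kvm2 driver talks to libvirt through its API rather than shelling out:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Define and start the private network and the domain from saved XML files.
	steps := [][]string{
		{"net-define", "net.xml"},
		{"net-start", "mk-newest-cni-037368"},
		{"define", "domain.xml"},
		{"start", "newest-cni-037368"},
	}
	for _, s := range steps {
		args := append([]string{"-c", "qemu:///system"}, s...)
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			log.Fatalf("virsh %v failed: %v\n%s", s, err, out)
		}
	}
	log.Println("network and domain defined and started")
}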
	I0404 23:14:31.530977   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:23:3c:1f in network default
	I0404 23:14:31.531540   70163 main.go:141] libmachine: (newest-cni-037368) Ensuring networks are active...
	I0404 23:14:31.531562   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:31.532384   70163 main.go:141] libmachine: (newest-cni-037368) Ensuring network default is active
	I0404 23:14:31.532762   70163 main.go:141] libmachine: (newest-cni-037368) Ensuring network mk-newest-cni-037368 is active
	I0404 23:14:31.533579   70163 main.go:141] libmachine: (newest-cni-037368) Getting domain xml...
	I0404 23:14:31.534419   70163 main.go:141] libmachine: (newest-cni-037368) Creating domain...
	I0404 23:14:32.809020   70163 main.go:141] libmachine: (newest-cni-037368) Waiting to get IP...
	I0404 23:14:32.810157   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:32.810614   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:32.810645   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:32.810575   70186 retry.go:31] will retry after 210.655771ms: waiting for machine to come up
	I0404 23:14:33.023279   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:33.023794   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:33.023820   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:33.023743   70186 retry.go:31] will retry after 260.218627ms: waiting for machine to come up
	I0404 23:14:33.285561   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:33.286096   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:33.286135   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:33.286054   70186 retry.go:31] will retry after 463.837334ms: waiting for machine to come up
	I0404 23:14:33.751872   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:33.752323   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:33.752355   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:33.752269   70186 retry.go:31] will retry after 449.398418ms: waiting for machine to come up
	I0404 23:14:34.202847   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:34.203383   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:34.203411   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:34.203335   70186 retry.go:31] will retry after 648.432709ms: waiting for machine to come up
	I0404 23:14:34.853129   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:34.853654   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:34.853686   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:34.853591   70186 retry.go:31] will retry after 646.996164ms: waiting for machine to come up
	I0404 23:14:35.502350   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:35.502916   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:35.502961   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:35.502870   70186 retry.go:31] will retry after 756.555637ms: waiting for machine to come up
	I0404 23:14:36.261479   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:36.261922   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:36.261948   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:36.261875   70186 retry.go:31] will retry after 1.321472833s: waiting for machine to come up
	I0404 23:14:37.585353   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:37.585877   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:37.585908   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:37.585782   70186 retry.go:31] will retry after 1.172339634s: waiting for machine to come up
	I0404 23:14:38.760166   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:38.760836   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:38.760870   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:38.760738   70186 retry.go:31] will retry after 2.322289196s: waiting for machine to come up
	I0404 23:14:41.084284   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:41.084881   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:41.084910   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:41.084812   70186 retry.go:31] will retry after 1.965671087s: waiting for machine to come up
	I0404 23:14:43.052166   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:43.052749   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:43.052783   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:43.052681   70186 retry.go:31] will retry after 2.736599487s: waiting for machine to come up
	I0404 23:14:45.790853   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:45.791272   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:45.791299   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:45.791226   70186 retry.go:31] will retry after 3.579403426s: waiting for machine to come up
	I0404 23:14:49.372474   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:49.373011   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:49.373035   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:49.372958   70186 retry.go:31] will retry after 5.429595005s: waiting for machine to come up
	I0404 23:14:54.803750   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:54.804322   70163 main.go:141] libmachine: (newest-cni-037368) Found IP for machine: 192.168.39.64
	I0404 23:14:54.804368   70163 main.go:141] libmachine: (newest-cni-037368) Reserving static IP address...
	I0404 23:14:54.804387   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has current primary IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:54.804859   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find host DHCP lease matching {name: "newest-cni-037368", mac: "52:54:00:28:28:2c", ip: "192.168.39.64"} in network mk-newest-cni-037368
	I0404 23:14:54.882752   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Getting to WaitForSSH function...
	I0404 23:14:54.882784   70163 main.go:141] libmachine: (newest-cni-037368) Reserved static IP address: 192.168.39.64
	I0404 23:14:54.882792   70163 main.go:141] libmachine: (newest-cni-037368) Waiting for SSH to be available...
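The "will retry after" lines above come from minikube polling for the VM's DHCP lease, sleeping a little longer between attempts until an address appears. A small sketch of that wait-with-backoff pattern, using a placeholder probe function rather than minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls probe until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) after each failed attempt.
func waitForIP(probe func() (string, error), deadline time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay, roughly like the log above
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 { // pretend the DHCP lease shows up on the fourth poll
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.64", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}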
	I0404 23:14:54.885827   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:54.886262   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:minikube Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:54.886293   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:54.886457   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Using SSH client type: external
	I0404 23:14:54.886488   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/id_rsa (-rw-------)
	I0404 23:14:54.886518   70163 main.go:141] libmachine: (newest-cni-037368) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 23:14:54.886542   70163 main.go:141] libmachine: (newest-cni-037368) DBG | About to run SSH command:
	I0404 23:14:54.886582   70163 main.go:141] libmachine: (newest-cni-037368) DBG | exit 0
	I0404 23:14:55.016494   70163 main.go:141] libmachine: (newest-cni-037368) DBG | SSH cmd err, output: <nil>: 
	I0404 23:14:55.016838   70163 main.go:141] libmachine: (newest-cni-037368) KVM machine creation complete!
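The external SSH probe logged just above simply runs "exit 0" on the guest and treats a zero exit status as proof that SSH is reachable. A comparable probe, reusing the address, key path, and the most relevant options from the logged command (an illustration, not minikube's sshutil code):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Run a no-op command on the guest; success means sshd is up and the key works.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/id_rsa",
		"-p", "22",
		"docker@192.168.39.64",
		"exit 0",
	}
	if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
		log.Fatalf("SSH not ready yet: %v\n%s", err, out)
	}
	log.Println("SSH is available")
}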
	I0404 23:14:55.017103   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetConfigRaw
	I0404 23:14:55.017646   70163 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:14:55.017828   70163 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:14:55.017983   70163 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0404 23:14:55.018000   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetState
	I0404 23:14:55.019243   70163 main.go:141] libmachine: Detecting operating system of created instance...
	I0404 23:14:55.019265   70163 main.go:141] libmachine: Waiting for SSH to be available...
	I0404 23:14:55.019270   70163 main.go:141] libmachine: Getting to WaitForSSH function...
	I0404 23:14:55.019276   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHHostname
	I0404 23:14:55.022048   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.022483   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:55.022531   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.022695   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHPort
	I0404 23:14:55.022992   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:55.023182   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:55.023332   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHUsername
	I0404 23:14:55.023514   70163 main.go:141] libmachine: Using SSH client type: native
	I0404 23:14:55.023753   70163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0404 23:14:55.023767   70163 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0404 23:14:55.135702   70163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 23:14:55.135725   70163 main.go:141] libmachine: Detecting the provisioner...
	I0404 23:14:55.135734   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHHostname
	I0404 23:14:55.138712   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.139086   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:55.139118   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.139235   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHPort
	I0404 23:14:55.139435   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:55.139606   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:55.139822   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHUsername
	I0404 23:14:55.139995   70163 main.go:141] libmachine: Using SSH client type: native
	I0404 23:14:55.140180   70163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0404 23:14:55.140193   70163 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0404 23:14:55.257110   70163 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0404 23:14:55.257177   70163 main.go:141] libmachine: found compatible host: buildroot
	I0404 23:14:55.257187   70163 main.go:141] libmachine: Provisioning with buildroot...
	I0404 23:14:55.257194   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetMachineName
	I0404 23:14:55.257432   70163 buildroot.go:166] provisioning hostname "newest-cni-037368"
	I0404 23:14:55.257461   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetMachineName
	I0404 23:14:55.257641   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHHostname
	I0404 23:14:55.260215   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.260627   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:55.260664   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.260774   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHPort
	I0404 23:14:55.260980   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:55.261144   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:55.261306   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHUsername
	I0404 23:14:55.261498   70163 main.go:141] libmachine: Using SSH client type: native
	I0404 23:14:55.261655   70163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0404 23:14:55.261667   70163 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-037368 && echo "newest-cni-037368" | sudo tee /etc/hostname
	I0404 23:14:55.391381   70163 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-037368
	
	I0404 23:14:55.391418   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHHostname
	I0404 23:14:55.394181   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.394660   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:55.394687   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.394868   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHPort
	I0404 23:14:55.395063   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:55.395246   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:55.395410   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHUsername
	I0404 23:14:55.395565   70163 main.go:141] libmachine: Using SSH client type: native
	I0404 23:14:55.395790   70163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0404 23:14:55.395816   70163 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-037368' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-037368/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-037368' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 23:14:55.521589   70163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 23:14:55.521619   70163 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 23:14:55.521642   70163 buildroot.go:174] setting up certificates
	I0404 23:14:55.521654   70163 provision.go:84] configureAuth start
	I0404 23:14:55.521662   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetMachineName
	I0404 23:14:55.521907   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetIP
	I0404 23:14:55.525229   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.525626   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:55.525656   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.525836   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHHostname
	I0404 23:14:55.528178   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.528566   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:55.528605   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.528757   70163 provision.go:143] copyHostCerts
	I0404 23:14:55.528819   70163 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 23:14:55.528836   70163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 23:14:55.528907   70163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 23:14:55.529005   70163 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 23:14:55.529013   70163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 23:14:55.529038   70163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 23:14:55.529107   70163 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 23:14:55.529117   70163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 23:14:55.529143   70163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 23:14:55.529204   70163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.newest-cni-037368 san=[127.0.0.1 192.168.39.64 localhost minikube newest-cni-037368]
	I0404 23:14:55.707049   70163 provision.go:177] copyRemoteCerts
	I0404 23:14:55.707107   70163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 23:14:55.707130   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHHostname
	I0404 23:14:55.710328   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.710667   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:55.710706   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.710878   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHPort
	I0404 23:14:55.711051   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:55.711255   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHUsername
	I0404 23:14:55.711439   70163 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/id_rsa Username:docker}
	I0404 23:14:55.799002   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 23:14:55.827697   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 23:14:55.855267   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 23:14:55.881262   70163 provision.go:87] duration metric: took 359.595283ms to configureAuth
	I0404 23:14:55.881294   70163 buildroot.go:189] setting minikube options for container-runtime
	I0404 23:14:55.881463   70163 config.go:182] Loaded profile config "newest-cni-037368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 23:14:55.881535   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHHostname
	I0404 23:14:55.884854   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.885339   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:55.885373   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:55.885591   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHPort
	I0404 23:14:55.885780   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:55.885997   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:55.886189   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHUsername
	I0404 23:14:55.886421   70163 main.go:141] libmachine: Using SSH client type: native
	I0404 23:14:55.886584   70163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0404 23:14:55.886599   70163 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 23:14:56.186143   70163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 23:14:56.186172   70163 main.go:141] libmachine: Checking connection to Docker...
	I0404 23:14:56.186180   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetURL
	I0404 23:14:56.187536   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Using libvirt version 6000000
	I0404 23:14:56.190066   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.190592   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:56.190622   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.190726   70163 main.go:141] libmachine: Docker is up and running!
	I0404 23:14:56.190743   70163 main.go:141] libmachine: Reticulating splines...
	I0404 23:14:56.190751   70163 client.go:171] duration metric: took 25.215457089s to LocalClient.Create
	I0404 23:14:56.190776   70163 start.go:167] duration metric: took 25.215523166s to libmachine.API.Create "newest-cni-037368"
	I0404 23:14:56.190790   70163 start.go:293] postStartSetup for "newest-cni-037368" (driver="kvm2")
	I0404 23:14:56.190806   70163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 23:14:56.190826   70163 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:14:56.191098   70163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 23:14:56.191122   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHHostname
	I0404 23:14:56.193293   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.193745   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:56.193785   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.193928   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHPort
	I0404 23:14:56.194107   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:56.194283   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHUsername
	I0404 23:14:56.194445   70163 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/id_rsa Username:docker}
	I0404 23:14:56.279256   70163 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 23:14:56.283846   70163 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 23:14:56.283868   70163 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 23:14:56.283948   70163 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 23:14:56.284069   70163 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 23:14:56.284227   70163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 23:14:56.294148   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 23:14:56.325140   70163 start.go:296] duration metric: took 134.336794ms for postStartSetup
	I0404 23:14:56.325185   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetConfigRaw
	I0404 23:14:56.325814   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetIP
	I0404 23:14:56.328525   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.328893   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:56.328943   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.329159   70163 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/config.json ...
	I0404 23:14:56.329374   70163 start.go:128] duration metric: took 25.374266565s to createHost
	I0404 23:14:56.329402   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHHostname
	I0404 23:14:56.331984   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.332537   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:56.332570   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.332794   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHPort
	I0404 23:14:56.333116   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:56.333322   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:56.333529   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHUsername
	I0404 23:14:56.333756   70163 main.go:141] libmachine: Using SSH client type: native
	I0404 23:14:56.333990   70163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0404 23:14:56.334018   70163 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 23:14:56.449114   70163 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712272496.434421565
	
	I0404 23:14:56.449143   70163 fix.go:216] guest clock: 1712272496.434421565
	I0404 23:14:56.449149   70163 fix.go:229] Guest: 2024-04-04 23:14:56.434421565 +0000 UTC Remote: 2024-04-04 23:14:56.329388952 +0000 UTC m=+25.509019356 (delta=105.032613ms)
	I0404 23:14:56.449167   70163 fix.go:200] guest clock delta is within tolerance: 105.032613ms
	I0404 23:14:56.449171   70163 start.go:83] releasing machines lock for "newest-cni-037368", held for 25.494173727s
	I0404 23:14:56.449189   70163 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:14:56.449490   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetIP
	I0404 23:14:56.452270   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.452641   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:56.452668   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.452816   70163 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:14:56.453393   70163 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:14:56.453592   70163 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:14:56.453668   70163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 23:14:56.453716   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHHostname
	I0404 23:14:56.453837   70163 ssh_runner.go:195] Run: cat /version.json
	I0404 23:14:56.453873   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHHostname
	I0404 23:14:56.456613   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.456951   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.456980   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:56.456998   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.457143   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHPort
	I0404 23:14:56.457342   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:56.457377   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:56.457407   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:56.457497   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHUsername
	I0404 23:14:56.457645   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHPort
	I0404 23:14:56.457641   70163 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/id_rsa Username:docker}
	I0404 23:14:56.457804   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHKeyPath
	I0404 23:14:56.457972   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetSSHUsername
	I0404 23:14:56.458149   70163 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/id_rsa Username:docker}
	I0404 23:14:56.537936   70163 ssh_runner.go:195] Run: systemctl --version
	I0404 23:14:56.578037   70163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 23:14:56.742846   70163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 23:14:56.749138   70163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 23:14:56.749210   70163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 23:14:56.765535   70163 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 23:14:56.765559   70163 start.go:494] detecting cgroup driver to use...
	I0404 23:14:56.765626   70163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 23:14:56.782715   70163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 23:14:56.796631   70163 docker.go:217] disabling cri-docker service (if available) ...
	I0404 23:14:56.796680   70163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 23:14:56.810619   70163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 23:14:56.824567   70163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 23:14:56.948907   70163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 23:14:57.120078   70163 docker.go:233] disabling docker service ...
	I0404 23:14:57.120169   70163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 23:14:57.135138   70163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 23:14:57.149128   70163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 23:14:57.270558   70163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 23:14:57.380808   70163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 23:14:57.394867   70163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 23:14:57.414478   70163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 23:14:57.414541   70163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 23:14:57.425180   70163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 23:14:57.425239   70163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 23:14:57.435632   70163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 23:14:57.446875   70163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 23:14:57.457630   70163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 23:14:57.468609   70163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 23:14:57.479282   70163 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 23:14:57.497643   70163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 23:14:57.507965   70163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 23:14:57.517395   70163 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 23:14:57.517446   70163 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 23:14:57.531342   70163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 23:14:57.542615   70163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 23:14:57.672446   70163 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 23:14:57.823656   70163 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 23:14:57.823729   70163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 23:14:57.829069   70163 start.go:562] Will wait 60s for crictl version
	I0404 23:14:57.829118   70163 ssh_runner.go:195] Run: which crictl
	I0404 23:14:57.833670   70163 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 23:14:57.872839   70163 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 23:14:57.872922   70163 ssh_runner.go:195] Run: crio --version
	I0404 23:14:57.904425   70163 ssh_runner.go:195] Run: crio --version
	I0404 23:14:57.935106   70163 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0404 23:14:57.936448   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetIP
	I0404 23:14:57.939559   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:57.939920   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:28:2c", ip: ""} in network mk-newest-cni-037368: {Iface:virbr1 ExpiryTime:2024-04-05 00:14:46 +0000 UTC Type:0 Mac:52:54:00:28:28:2c Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:newest-cni-037368 Clientid:01:52:54:00:28:28:2c}
	I0404 23:14:57.939963   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined IP address 192.168.39.64 and MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:57.940168   70163 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 23:14:57.944509   70163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 23:14:57.960440   70163 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0404 23:14:57.961777   70163 kubeadm.go:877] updating cluster {Name:newest-cni-037368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.0 ClusterName:newest-cni-037368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host M
ount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 23:14:57.961902   70163 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 23:14:57.961975   70163 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 23:14:58.000175   70163 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0404 23:14:58.000248   70163 ssh_runner.go:195] Run: which lz4
	I0404 23:14:58.004646   70163 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 23:14:58.009072   70163 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 23:14:58.009100   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394409945 bytes)
	I0404 23:14:59.553469   70163 crio.go:462] duration metric: took 1.548878005s to copy over tarball
	I0404 23:14:59.553563   70163 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 23:15:01.827605   70163 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273991334s)
	I0404 23:15:01.827639   70163 crio.go:469] duration metric: took 2.274144342s to extract the tarball
	I0404 23:15:01.827649   70163 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 23:15:01.866091   70163 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 23:15:01.913602   70163 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 23:15:01.913625   70163 cache_images.go:84] Images are preloaded, skipping loading
	I0404 23:15:01.913636   70163 kubeadm.go:928] updating node { 192.168.39.64 8443 v1.30.0-rc.0 crio true true} ...
	I0404 23:15:01.913755   70163 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-037368 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-037368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 23:15:01.913844   70163 ssh_runner.go:195] Run: crio config
	I0404 23:15:01.960737   70163 cni.go:84] Creating CNI manager for ""
	I0404 23:15:01.960760   70163 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 23:15:01.960773   70163 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0404 23:15:01.960797   70163 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.64 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-037368 NodeName:newest-cni-037368 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs
:map[] NodeIP:192.168.39.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 23:15:01.960917   70163 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-037368"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 23:15:01.960990   70163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0404 23:15:01.971686   70163 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 23:15:01.971764   70163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 23:15:01.981530   70163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0404 23:15:01.999344   70163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0404 23:15:02.016955   70163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0404 23:15:02.033742   70163 ssh_runner.go:195] Run: grep 192.168.39.64	control-plane.minikube.internal$ /etc/hosts
	I0404 23:15:02.037737   70163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 23:15:02.052767   70163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 23:15:02.195159   70163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 23:15:02.215631   70163 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368 for IP: 192.168.39.64
	I0404 23:15:02.215659   70163 certs.go:194] generating shared ca certs ...
	I0404 23:15:02.215681   70163 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:15:02.215878   70163 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 23:15:02.215952   70163 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 23:15:02.215964   70163 certs.go:256] generating profile certs ...
	I0404 23:15:02.216031   70163 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/client.key
	I0404 23:15:02.216050   70163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/client.crt with IP's: []
	I0404 23:15:02.358486   70163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/client.crt ...
	I0404 23:15:02.358516   70163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/client.crt: {Name:mk4def73756e804706705f4952d02a51c2467d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:15:02.358678   70163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/client.key ...
	I0404 23:15:02.358689   70163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/client.key: {Name:mk793ee02c39f4282f334a5c59b74a1f62127ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:15:02.358765   70163 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.key.c3e276dc
	I0404 23:15:02.358779   70163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.crt.c3e276dc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.64]
	I0404 23:15:02.509103   70163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.crt.c3e276dc ...
	I0404 23:15:02.509130   70163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.crt.c3e276dc: {Name:mkd0f101e7efdc657ae12e8b0572596d9b49a442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:15:02.509281   70163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.key.c3e276dc ...
	I0404 23:15:02.509295   70163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.key.c3e276dc: {Name:mk4ed4cb06406d17f361549e98b403c058fc2fa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:15:02.509363   70163 certs.go:381] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.crt.c3e276dc -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.crt
	I0404 23:15:02.509443   70163 certs.go:385] copying /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.key.c3e276dc -> /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.key
	I0404 23:15:02.509501   70163 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/proxy-client.key
	I0404 23:15:02.509516   70163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/proxy-client.crt with IP's: []
	I0404 23:15:02.730285   70163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/proxy-client.crt ...
	I0404 23:15:02.730320   70163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/proxy-client.crt: {Name:mk53597b03ff9e6d2409b56bb01dc0b13c6bf1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:15:02.730492   70163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/proxy-client.key ...
	I0404 23:15:02.730508   70163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/proxy-client.key: {Name:mk15fb60921f26844ae19ee1db48707bb52e3186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:15:02.730700   70163 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 23:15:02.730743   70163 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 23:15:02.730754   70163 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 23:15:02.730778   70163 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 23:15:02.730802   70163 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 23:15:02.730827   70163 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 23:15:02.730878   70163 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 23:15:02.731512   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 23:15:02.762920   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 23:15:02.791724   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 23:15:02.818747   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 23:15:02.846433   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0404 23:15:02.874212   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 23:15:02.903818   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 23:15:02.933457   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 23:15:02.960284   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 23:15:02.994857   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 23:15:03.031633   70163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 23:15:03.067339   70163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 23:15:03.098530   70163 ssh_runner.go:195] Run: openssl version
	I0404 23:15:03.105327   70163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 23:15:03.119016   70163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 23:15:03.124068   70163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 23:15:03.124157   70163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 23:15:03.130858   70163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 23:15:03.143033   70163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 23:15:03.155082   70163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 23:15:03.160041   70163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 23:15:03.160097   70163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 23:15:03.166278   70163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 23:15:03.178399   70163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 23:15:03.189659   70163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 23:15:03.194625   70163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 23:15:03.194687   70163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 23:15:03.200999   70163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 23:15:03.212751   70163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 23:15:03.217209   70163 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0404 23:15:03.217264   70163 kubeadm.go:391] StartCluster: {Name:newest-cni-037368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.0 ClusterName:newest-cni-037368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 23:15:03.217357   70163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 23:15:03.217409   70163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 23:15:03.257370   70163 cri.go:89] found id: ""
	I0404 23:15:03.257443   70163 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0404 23:15:03.268418   70163 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 23:15:03.278954   70163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:15:03.289510   70163 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:15:03.289527   70163 kubeadm.go:156] found existing configuration files:
	
	I0404 23:15:03.289576   70163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 23:15:03.299262   70163 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:15:03.299318   70163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:15:03.309042   70163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 23:15:03.319204   70163 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:15:03.319266   70163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:15:03.329510   70163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 23:15:03.339315   70163 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:15:03.339381   70163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:15:03.349280   70163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 23:15:03.358888   70163 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:15:03.358946   70163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 23:15:03.368829   70163 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:15:03.606504   70163 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.324647599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272509324614082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31fa1aff-ca77-4e74-b2ce-37030198828a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.325539108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=377f96e4-0895-4bf7-be23-5b846b7b728d name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.325626611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=377f96e4-0895-4bf7-be23-5b846b7b728d name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.325866614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271319944324519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f64a5cb6eba494c5bacaa8a5881ccbf4bd9df021855e20a79ea9f9c38cef1,PodSandboxId:63f396ce437fa882b6204ac6868e442877bddc1ce19d6535769e174a0ff03820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271299313946588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad0b43d7-3e6c-4b57-9a18-ee03b01d2a25,},Annotations:map[string]string{io.kubernetes.container.hash: 5aabad23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429,PodSandboxId:746efa3d6e4565cbb3f3dc4e74787f82b633b20b658eaab609b9585ef1beef88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271296886064386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qh9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adbc1cb-cb87-4593-a183-a9a14cb8ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: f724136b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271289115513589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664,PodSandboxId:b8dda25455029ece6cd6ec97d5b707bcb7ed93e4493d47c3e01e5c63c963ab94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271289097805841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-psst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e8cdd-06fb-454a-97a2-7b0764ed0
a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 79368bd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c,PodSandboxId:816c2f4344e1332cf43d7411bdb6f2db971d48600eea9f7dbcb6906832bcbdf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271284447319322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74178d8315350a9cb5b02bd98c690be0,},Annotations:map[string]string{io.kub
ernetes.container.hash: ce79f91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef,PodSandboxId:e3d209d7c560b01535e2f1a4a0ff6613e9793f632147f3000e3d5e72cb15e553,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271284414680058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d13c82049393c6fce9c505b93bdbf112,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3baf2b20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88,PodSandboxId:27b7cde0b4274b4977193c152f717e9205a9f1637bc8e8910d35ee84b6672e3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271284422829216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e47b5206312f0c206cc4d3830884e5,},Annotations:map[string]string{io.kubernetes.container.hash:
be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b,PodSandboxId:adce183fc81d71b05857ec0a6e4609cd83d6a392385fd9712104a5abdb2f41fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271284364722390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfa7b30e7cec5445fb29a26ca742f03,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=377f96e4-0895-4bf7-be23-5b846b7b728d name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.369320308Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3daf441a-612f-4ad0-b8bd-daba6610f687 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.369488344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3daf441a-612f-4ad0-b8bd-daba6610f687 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.371084636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e19e5ddb-4038-4baf-aa6e-fb189d543702 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.371755111Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272509371726393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e19e5ddb-4038-4baf-aa6e-fb189d543702 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.372435570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f039a35-1eb5-45c1-bfcf-b2b9407bd91d name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.372489754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f039a35-1eb5-45c1-bfcf-b2b9407bd91d name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.372734873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271319944324519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f64a5cb6eba494c5bacaa8a5881ccbf4bd9df021855e20a79ea9f9c38cef1,PodSandboxId:63f396ce437fa882b6204ac6868e442877bddc1ce19d6535769e174a0ff03820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271299313946588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad0b43d7-3e6c-4b57-9a18-ee03b01d2a25,},Annotations:map[string]string{io.kubernetes.container.hash: 5aabad23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429,PodSandboxId:746efa3d6e4565cbb3f3dc4e74787f82b633b20b658eaab609b9585ef1beef88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271296886064386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qh9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adbc1cb-cb87-4593-a183-a9a14cb8ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: f724136b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271289115513589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664,PodSandboxId:b8dda25455029ece6cd6ec97d5b707bcb7ed93e4493d47c3e01e5c63c963ab94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271289097805841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-psst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e8cdd-06fb-454a-97a2-7b0764ed0
a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 79368bd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c,PodSandboxId:816c2f4344e1332cf43d7411bdb6f2db971d48600eea9f7dbcb6906832bcbdf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271284447319322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74178d8315350a9cb5b02bd98c690be0,},Annotations:map[string]string{io.kub
ernetes.container.hash: ce79f91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef,PodSandboxId:e3d209d7c560b01535e2f1a4a0ff6613e9793f632147f3000e3d5e72cb15e553,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271284414680058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d13c82049393c6fce9c505b93bdbf112,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3baf2b20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88,PodSandboxId:27b7cde0b4274b4977193c152f717e9205a9f1637bc8e8910d35ee84b6672e3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271284422829216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e47b5206312f0c206cc4d3830884e5,},Annotations:map[string]string{io.kubernetes.container.hash:
be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b,PodSandboxId:adce183fc81d71b05857ec0a6e4609cd83d6a392385fd9712104a5abdb2f41fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271284364722390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfa7b30e7cec5445fb29a26ca742f03,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f039a35-1eb5-45c1-bfcf-b2b9407bd91d name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.415733211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6ed49db-2e5c-4e04-9599-eeb1e0980a35 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.415945505Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6ed49db-2e5c-4e04-9599-eeb1e0980a35 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.417268638Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66e9ddb5-7c56-4e94-91fc-f17d3314a7d1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.417848853Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272509417823462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66e9ddb5-7c56-4e94-91fc-f17d3314a7d1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.418431735Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c45001ec-bb34-400b-bf03-144ea6f9c23b name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.418482931Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c45001ec-bb34-400b-bf03-144ea6f9c23b name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.418729817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271319944324519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f64a5cb6eba494c5bacaa8a5881ccbf4bd9df021855e20a79ea9f9c38cef1,PodSandboxId:63f396ce437fa882b6204ac6868e442877bddc1ce19d6535769e174a0ff03820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271299313946588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad0b43d7-3e6c-4b57-9a18-ee03b01d2a25,},Annotations:map[string]string{io.kubernetes.container.hash: 5aabad23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429,PodSandboxId:746efa3d6e4565cbb3f3dc4e74787f82b633b20b658eaab609b9585ef1beef88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271296886064386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qh9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adbc1cb-cb87-4593-a183-a9a14cb8ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: f724136b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271289115513589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664,PodSandboxId:b8dda25455029ece6cd6ec97d5b707bcb7ed93e4493d47c3e01e5c63c963ab94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271289097805841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-psst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e8cdd-06fb-454a-97a2-7b0764ed0
a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 79368bd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c,PodSandboxId:816c2f4344e1332cf43d7411bdb6f2db971d48600eea9f7dbcb6906832bcbdf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271284447319322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74178d8315350a9cb5b02bd98c690be0,},Annotations:map[string]string{io.kub
ernetes.container.hash: ce79f91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef,PodSandboxId:e3d209d7c560b01535e2f1a4a0ff6613e9793f632147f3000e3d5e72cb15e553,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271284414680058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d13c82049393c6fce9c505b93bdbf112,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3baf2b20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88,PodSandboxId:27b7cde0b4274b4977193c152f717e9205a9f1637bc8e8910d35ee84b6672e3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271284422829216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e47b5206312f0c206cc4d3830884e5,},Annotations:map[string]string{io.kubernetes.container.hash:
be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b,PodSandboxId:adce183fc81d71b05857ec0a6e4609cd83d6a392385fd9712104a5abdb2f41fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271284364722390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfa7b30e7cec5445fb29a26ca742f03,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c45001ec-bb34-400b-bf03-144ea6f9c23b name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.461579282Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85a810a1-df24-437d-aed7-ea17b63136ac name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.461655235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85a810a1-df24-437d-aed7-ea17b63136ac name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.463487064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b114561-12ee-4d06-8c79-d00e0b996640 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.464138955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272509464110377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b114561-12ee-4d06-8c79-d00e0b996640 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.464769400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc1ca9d2-7455-4a91-b0e0-6b5ce256efa0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.464819854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc1ca9d2-7455-4a91-b0e0-6b5ce256efa0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:09 embed-certs-143118 crio[727]: time="2024-04-04 23:15:09.465031716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271319944324519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6f64a5cb6eba494c5bacaa8a5881ccbf4bd9df021855e20a79ea9f9c38cef1,PodSandboxId:63f396ce437fa882b6204ac6868e442877bddc1ce19d6535769e174a0ff03820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271299313946588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad0b43d7-3e6c-4b57-9a18-ee03b01d2a25,},Annotations:map[string]string{io.kubernetes.container.hash: 5aabad23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429,PodSandboxId:746efa3d6e4565cbb3f3dc4e74787f82b633b20b658eaab609b9585ef1beef88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271296886064386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9qh9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adbc1cb-cb87-4593-a183-a9a14cb8ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: f724136b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a,PodSandboxId:261fceb686accbe0a65926a384ace16b4c88f9597fcea7956755ab3ffa1dba3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271289115513589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3faa390d-3660-4f7d-a20c-e36ee00f2863,},Annotations:map[string]string{io.kubernetes.container.hash: 3323d75b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664,PodSandboxId:b8dda25455029ece6cd6ec97d5b707bcb7ed93e4493d47c3e01e5c63c963ab94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271289097805841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-psst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e8cdd-06fb-454a-97a2-7b0764ed0
a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 79368bd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c,PodSandboxId:816c2f4344e1332cf43d7411bdb6f2db971d48600eea9f7dbcb6906832bcbdf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271284447319322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74178d8315350a9cb5b02bd98c690be0,},Annotations:map[string]string{io.kub
ernetes.container.hash: ce79f91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef,PodSandboxId:e3d209d7c560b01535e2f1a4a0ff6613e9793f632147f3000e3d5e72cb15e553,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271284414680058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d13c82049393c6fce9c505b93bdbf112,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3baf2b20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88,PodSandboxId:27b7cde0b4274b4977193c152f717e9205a9f1637bc8e8910d35ee84b6672e3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271284422829216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e47b5206312f0c206cc4d3830884e5,},Annotations:map[string]string{io.kubernetes.container.hash:
be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b,PodSandboxId:adce183fc81d71b05857ec0a6e4609cd83d6a392385fd9712104a5abdb2f41fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271284364722390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-143118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfa7b30e7cec5445fb29a26ca742f03,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc1ca9d2-7455-4a91-b0e0-6b5ce256efa0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	634138d6bde20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   261fceb686acc       storage-provisioner
	eb6f64a5cb6eb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   63f396ce437fa       busybox
	712b227f7cfb0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   746efa3d6e456       coredns-76f75df574-9qh9s
	6c047a719f155       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   261fceb686acc       storage-provisioner
	27fc077394a7d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      20 minutes ago      Running             kube-proxy                1                   b8dda25455029       kube-proxy-psst7
	ecdd813ae02e8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   816c2f4344e13       etcd-embed-certs-143118
	46137dbe2189d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      20 minutes ago      Running             kube-scheduler            1                   27b7cde0b4274       kube-scheduler-embed-certs-143118
	31cb759c8e7bc       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      20 minutes ago      Running             kube-apiserver            1                   e3d209d7c560b       kube-apiserver-embed-certs-143118
	58b9430fea2e8       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      20 minutes ago      Running             kube-controller-manager   1                   adce183fc81d7       kube-controller-manager-embed-certs-143118
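	
	The container status table above is the same data carried by the /runtime.v1.RuntimeService/ListContainers responses earlier in this log, rendered one container per row. As a minimal sketch (not part of the test run), an equivalent listing could be pulled directly on the node, assuming crictl is available there and CRI-O is listening on its default socket:
	
	  # run on the embed-certs-143118 node, e.g. after `minikube ssh -p embed-certs-143118`
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  # narrow to a single sandbox, e.g. the coredns pod shown above
	  sudo crictl ps -a --pod 746efa3d6e4565cbb3f3dc4e74787f82b633b20b658eaab609b9585ef1beef88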
	
	
	==> coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39408 - 45167 "HINFO IN 2687519719721392437.3884915358984449562. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.058065701s
	
	
	==> describe nodes <==
	Name:               embed-certs-143118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-143118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=embed-certs-143118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T22_46_10_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 22:46:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-143118
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 23:15:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 23:10:36 +0000   Thu, 04 Apr 2024 22:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 23:10:36 +0000   Thu, 04 Apr 2024 22:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 23:10:36 +0000   Thu, 04 Apr 2024 22:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 23:10:36 +0000   Thu, 04 Apr 2024 22:54:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.137
	  Hostname:    embed-certs-143118
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7148ba4297d4a75bed2c7ff809a89d8
	  System UUID:                a7148ba4-297d-4a75-bed2-c7ff809a89d8
	  Boot ID:                    4f0c6e40-0013-4670-ae75-864aac291198
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-76f75df574-9qh9s                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-143118                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-143118             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-143118    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-psst7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-143118             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-xwm4m               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-143118 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-143118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-143118 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                28m                kubelet          Node embed-certs-143118 status is now: NodeReady
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-143118 event: Registered Node embed-certs-143118 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-143118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-143118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-143118 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-143118 event: Registered Node embed-certs-143118 in Controller
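	
	The percentages in the Allocated resources section above are just the summed pod requests and limits divided by the node's Allocatable figures (2 CPUs and 2164184Ki of memory), truncated to whole percent. A quick sketch, using only numbers copied from this describe-nodes output, that reproduces them:
	
	  # plain shell arithmetic; 2 CPUs = 2000m, memory allocatable = 2164184Ki
	  echo "cpu requests:    $(( 850 * 100 / 2000 ))%"            # 42%
	  echo "memory requests: $(( 370 * 1024 * 100 / 2164184 ))%"  # 17%
	  echo "memory limits:   $(( 170 * 1024 * 100 / 2164184 ))%"  # 8%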
	
	
	==> dmesg <==
	[Apr 4 22:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052957] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042208] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.541196] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.830176] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.643540] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.437015] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.058374] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059203] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.210488] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.131081] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.318465] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.820921] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +0.063481] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.398118] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +5.599380] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.970950] systemd-fstab-generator[1567]: Ignoring "noauto" option for root device
	[  +5.295483] kauditd_printk_skb: 78 callbacks suppressed
	[Apr 4 22:55] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] <==
	{"level":"info","ts":"2024-04-04T22:55:43.595821Z","caller":"traceutil/trace.go:171","msg":"trace[1163610137] linearizableReadLoop","detail":"{readStateIndex:686; appliedIndex:685; }","duration":"770.502068ms","start":"2024-04-04T22:55:42.825293Z","end":"2024-04-04T22:55:43.595795Z","steps":["trace[1163610137] 'read index received'  (duration: 218.528802ms)","trace[1163610137] 'applied index is now lower than readState.Index'  (duration: 551.972531ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-04T22:55:43.595988Z","caller":"traceutil/trace.go:171","msg":"trace[681328634] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"855.461768ms","start":"2024-04-04T22:55:42.740518Z","end":"2024-04-04T22:55:43.595979Z","steps":["trace[681328634] 'process raft request'  (duration: 603.910856ms)","trace[681328634] 'compare'  (duration: 250.983624ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-04T22:55:43.596066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:42.740507Z","time spent":"855.527348ms","remote":"127.0.0.1:44596","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4268,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" mod_revision:622 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" value_size:4202 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" > >"}
	{"level":"warn","ts":"2024-04-04T22:55:43.596215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"770.921421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" ","response":"range_response_count:1 size:4283"}
	{"level":"info","ts":"2024-04-04T22:55:43.596256Z","caller":"traceutil/trace.go:171","msg":"trace[221362184] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m; range_end:; response_count:1; response_revision:633; }","duration":"770.981847ms","start":"2024-04-04T22:55:42.825268Z","end":"2024-04-04T22:55:43.59625Z","steps":["trace[221362184] 'agreement among raft nodes before linearized reading'  (duration: 770.916402ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.596277Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:42.825256Z","time spent":"771.016422ms","remote":"127.0.0.1:44596","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4307,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xwm4m\" "}
	{"level":"warn","ts":"2024-04-04T22:55:43.596538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"544.018085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xwm4m.17c335abad2dcc3f\" ","response":"range_response_count:1 size:784"}
	{"level":"info","ts":"2024-04-04T22:55:43.596654Z","caller":"traceutil/trace.go:171","msg":"trace[2075725438] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-57f55c9bc5-xwm4m.17c335abad2dcc3f; range_end:; response_count:1; response_revision:633; }","duration":"544.322923ms","start":"2024-04-04T22:55:43.05232Z","end":"2024-04-04T22:55:43.596643Z","steps":["trace[2075725438] 'agreement among raft nodes before linearized reading'  (duration: 544.155919ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.596699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:43.052305Z","time spent":"544.386891ms","remote":"127.0.0.1:44490","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":808,"request content":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xwm4m.17c335abad2dcc3f\" "}
	{"level":"info","ts":"2024-04-04T23:04:46.423175Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2024-04-04T23:04:46.437809Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":841,"took":"13.819656ms","hash":3509900284,"current-db-size-bytes":2654208,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2654208,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-04-04T23:04:46.437878Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3509900284,"revision":841,"compact-revision":-1}
	{"level":"info","ts":"2024-04-04T23:09:46.432324Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1083}
	{"level":"info","ts":"2024-04-04T23:09:46.437136Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1083,"took":"4.095774ms","hash":3517074941,"current-db-size-bytes":2654208,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1642496,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-04T23:09:46.437213Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3517074941,"revision":1083,"compact-revision":841}
	{"level":"info","ts":"2024-04-04T23:14:46.44107Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1327}
	{"level":"info","ts":"2024-04-04T23:14:46.446453Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1327,"took":"4.378949ms","hash":1548130336,"current-db-size-bytes":2654208,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1671168,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-04-04T23:14:46.446547Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1548130336,"revision":1327,"compact-revision":1083}
	{"level":"info","ts":"2024-04-04T23:14:49.389129Z","caller":"traceutil/trace.go:171","msg":"trace[1153325252] transaction","detail":"{read_only:false; response_revision:1572; number_of_response:1; }","duration":"205.283357ms","start":"2024-04-04T23:14:49.183797Z","end":"2024-04-04T23:14:49.389081Z","steps":["trace[1153325252] 'process raft request'  (duration: 196.145323ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T23:15:04.315268Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.889555ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9756079559263550777 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-csy76cuzzymnk5gwsbhol34hoy\" mod_revision:1576 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-csy76cuzzymnk5gwsbhol34hoy\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-csy76cuzzymnk5gwsbhol34hoy\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-04T23:15:04.315409Z","caller":"traceutil/trace.go:171","msg":"trace[1291770342] transaction","detail":"{read_only:false; response_revision:1586; number_of_response:1; }","duration":"266.682874ms","start":"2024-04-04T23:15:04.048708Z","end":"2024-04-04T23:15:04.315391Z","steps":["trace[1291770342] 'process raft request'  (duration: 136.465507ms)","trace[1291770342] 'compare'  (duration: 129.515969ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-04T23:15:04.842222Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"364.725506ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9756079559263550778 > lease_revoke:<id:07648eab525a7ce7>","response":"size:29"}
	{"level":"info","ts":"2024-04-04T23:15:04.842451Z","caller":"traceutil/trace.go:171","msg":"trace[916078928] linearizableReadLoop","detail":"{readStateIndex:1880; appliedIndex:1879; }","duration":"152.480005ms","start":"2024-04-04T23:15:04.689958Z","end":"2024-04-04T23:15:04.842438Z","steps":["trace[916078928] 'read index received'  (duration: 24.314µs)","trace[916078928] 'applied index is now lower than readState.Index'  (duration: 152.454501ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-04T23:15:04.842576Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.630532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-04T23:15:04.8426Z","caller":"traceutil/trace.go:171","msg":"trace[679180489] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1586; }","duration":"152.678188ms","start":"2024-04-04T23:15:04.689912Z","end":"2024-04-04T23:15:04.84259Z","steps":["trace[679180489] 'agreement among raft nodes before linearized reading'  (duration: 152.632507ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:15:09 up 20 min,  0 users,  load average: 0.37, 0.16, 0.11
	Linux embed-certs-143118 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] <==
	I0404 23:09:48.816132       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:10:48.815672       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:10:48.815908       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:10:48.815936       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:10:48.816792       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:10:48.816842       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:10:48.818060       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:12:48.816832       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:12:48.816942       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:12:48.816951       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:12:48.819239       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:12:48.819460       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:12:48.819509       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:14:47.818450       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:14:47.818786       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0404 23:14:48.819649       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:14:48.819804       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:14:48.819842       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:14:48.819985       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:14:48.820145       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:14:48.821387       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] <==
	I0404 23:09:31.280870       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:10:00.768511       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:10:01.292255       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:10:30.776031       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:10:31.301193       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:11:00.781592       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:11:01.308601       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0404 23:11:02.753541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="373.221µs"
	I0404 23:11:13.749692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="154.188µs"
	E0404 23:11:30.788013       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:11:31.318631       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:12:00.793592       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:12:01.325950       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:12:30.800294       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:12:31.337762       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:13:00.806025       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:13:01.346894       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:13:30.811055       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:13:31.355911       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:14:00.817716       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:14:01.365543       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:14:30.825513       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:14:31.375744       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:15:00.833743       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:15:01.384483       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] <==
	I0404 22:54:49.301146       1 server_others.go:72] "Using iptables proxy"
	I0404 22:54:49.325662       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.137"]
	I0404 22:54:49.382786       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 22:54:49.382807       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 22:54:49.382823       1 server_others.go:168] "Using iptables Proxier"
	I0404 22:54:49.385867       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 22:54:49.386146       1 server.go:865] "Version info" version="v1.29.3"
	I0404 22:54:49.386158       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:54:49.387646       1 config.go:188] "Starting service config controller"
	I0404 22:54:49.387688       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 22:54:49.387707       1 config.go:97] "Starting endpoint slice config controller"
	I0404 22:54:49.387712       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 22:54:49.388074       1 config.go:315] "Starting node config controller"
	I0404 22:54:49.388111       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 22:54:49.490483       1 shared_informer.go:318] Caches are synced for node config
	I0404 22:54:49.490859       1 shared_informer.go:318] Caches are synced for service config
	I0404 22:54:49.490935       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] <==
	I0404 22:54:45.703174       1 serving.go:380] Generated self-signed cert in-memory
	W0404 22:54:47.728910       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0404 22:54:47.728964       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 22:54:47.728976       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0404 22:54:47.728982       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0404 22:54:47.821082       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0404 22:54:47.821132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:54:47.827907       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0404 22:54:47.828039       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0404 22:54:47.828053       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:54:47.828066       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0404 22:54:47.928152       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 04 23:12:43 embed-certs-143118 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:12:43 embed-certs-143118 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:12:43 embed-certs-143118 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:12:43 embed-certs-143118 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:12:44 embed-certs-143118 kubelet[942]: E0404 23:12:44.732450     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:12:58 embed-certs-143118 kubelet[942]: E0404 23:12:58.732939     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:13:12 embed-certs-143118 kubelet[942]: E0404 23:13:12.733061     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:13:27 embed-certs-143118 kubelet[942]: E0404 23:13:27.732881     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:13:38 embed-certs-143118 kubelet[942]: E0404 23:13:38.733766     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:13:43 embed-certs-143118 kubelet[942]: E0404 23:13:43.758292     942 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 23:13:43 embed-certs-143118 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:13:43 embed-certs-143118 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:13:43 embed-certs-143118 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:13:43 embed-certs-143118 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:13:50 embed-certs-143118 kubelet[942]: E0404 23:13:50.733037     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:14:05 embed-certs-143118 kubelet[942]: E0404 23:14:05.733519     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:14:20 embed-certs-143118 kubelet[942]: E0404 23:14:20.732954     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:14:35 embed-certs-143118 kubelet[942]: E0404 23:14:35.742225     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:14:43 embed-certs-143118 kubelet[942]: E0404 23:14:43.763207     942 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 23:14:43 embed-certs-143118 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:14:43 embed-certs-143118 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:14:43 embed-certs-143118 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:14:43 embed-certs-143118 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:14:47 embed-certs-143118 kubelet[942]: E0404 23:14:47.732706     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	Apr 04 23:14:59 embed-certs-143118 kubelet[942]: E0404 23:14:59.741205     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwm4m" podUID="1e43f30f-7be7-4083-8d39-eb482e5127a5"
	
	
	==> storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] <==
	I0404 22:55:20.097608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0404 22:55:20.115046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0404 22:55:20.115169       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0404 22:55:37.526898       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0404 22:55:37.527410       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-143118_35f99663-34e0-4ce6-9f9f-5b17186545aa!
	I0404 22:55:37.529269       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2135055d-48db-48ff-a18c-7eb1367f3d59", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-143118_35f99663-34e0-4ce6-9f9f-5b17186545aa became leader
	I0404 22:55:37.627974       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-143118_35f99663-34e0-4ce6-9f9f-5b17186545aa!
	
	
	==> storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] <==
	I0404 22:54:49.264544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0404 22:55:19.269086       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-143118 -n embed-certs-143118
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-143118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xwm4m
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-143118 describe pod metrics-server-57f55c9bc5-xwm4m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-143118 describe pod metrics-server-57f55c9bc5-xwm4m: exit status 1 (78.680088ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xwm4m" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-143118 describe pod metrics-server-57f55c9bc5-xwm4m: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (411.94s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (358.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-024416 -n no-preload-024416
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-04 23:14:50.69884849 +0000 UTC m=+6346.561834195
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-024416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-024416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.393µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-024416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-024416 -n no-preload-024416
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-024416 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-024416 logs -n 25: (1.264204883s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo find                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo crio                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-063570                                       | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-443615 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | disable-driver-mounts-443615                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:46 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-952083  | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-143118            | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-024416             | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-343162        | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-952083       | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 23:00 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-143118                 | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-024416                  | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-343162             | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:14 UTC | 04 Apr 24 23:14 UTC |
	| start   | -p newest-cni-037368 --memory=2200 --alsologtostderr   | newest-cni-037368            | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:14 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 23:14:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 23:14:30.870024   70163 out.go:291] Setting OutFile to fd 1 ...
	I0404 23:14:30.870269   70163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 23:14:30.870292   70163 out.go:304] Setting ErrFile to fd 2...
	I0404 23:14:30.870306   70163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 23:14:30.870801   70163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 23:14:30.871498   70163 out.go:298] Setting JSON to false
	I0404 23:14:30.872458   70163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7016,"bootTime":1712265455,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 23:14:30.872535   70163 start.go:139] virtualization: kvm guest
	I0404 23:14:30.875964   70163 out.go:177] * [newest-cni-037368] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 23:14:30.877646   70163 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 23:14:30.877641   70163 notify.go:220] Checking for updates...
	I0404 23:14:30.879382   70163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 23:14:30.881062   70163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 23:14:30.883864   70163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 23:14:30.885655   70163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 23:14:30.887559   70163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 23:14:30.889889   70163 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 23:14:30.889991   70163 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 23:14:30.890113   70163 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 23:14:30.890250   70163 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 23:14:30.929805   70163 out.go:177] * Using the kvm2 driver based on user configuration
	I0404 23:14:30.931316   70163 start.go:297] selected driver: kvm2
	I0404 23:14:30.931336   70163 start.go:901] validating driver "kvm2" against <nil>
	I0404 23:14:30.931349   70163 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 23:14:30.932020   70163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 23:14:30.932096   70163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 23:14:30.949067   70163 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 23:14:30.949143   70163 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0404 23:14:30.949181   70163 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0404 23:14:30.949496   70163 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0404 23:14:30.949581   70163 cni.go:84] Creating CNI manager for ""
	I0404 23:14:30.949600   70163 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 23:14:30.949617   70163 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0404 23:14:30.949700   70163 start.go:340] cluster config:
	{Name:newest-cni-037368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-037368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 23:14:30.949819   70163 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 23:14:30.952761   70163 out.go:177] * Starting "newest-cni-037368" primary control-plane node in "newest-cni-037368" cluster
	I0404 23:14:30.954443   70163 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 23:14:30.954511   70163 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0404 23:14:30.954523   70163 cache.go:56] Caching tarball of preloaded images
	I0404 23:14:30.954615   70163 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 23:14:30.954631   70163 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on crio
	I0404 23:14:30.954760   70163 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/config.json ...
	I0404 23:14:30.954791   70163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/config.json: {Name:mkdf5e70da216e38ff3343882e17305528e61904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:14:30.954952   70163 start.go:360] acquireMachinesLock for newest-cni-037368: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 23:14:30.954987   70163 start.go:364] duration metric: took 19.143µs to acquireMachinesLock for "newest-cni-037368"
	I0404 23:14:30.955011   70163 start.go:93] Provisioning new machine with config: &{Name:newest-cni-037368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-037368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 23:14:30.955096   70163 start.go:125] createHost starting for "" (driver="kvm2")
	I0404 23:14:30.956983   70163 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0404 23:14:30.957155   70163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:14:30.957292   70163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:14:30.973171   70163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38497
	I0404 23:14:30.973710   70163 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:14:30.974232   70163 main.go:141] libmachine: Using API Version  1
	I0404 23:14:30.974251   70163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:14:30.974654   70163 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:14:30.974912   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetMachineName
	I0404 23:14:30.975088   70163 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:14:30.975254   70163 start.go:159] libmachine.API.Create for "newest-cni-037368" (driver="kvm2")
	I0404 23:14:30.975284   70163 client.go:168] LocalClient.Create starting
	I0404 23:14:30.975318   70163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem
	I0404 23:14:30.975355   70163 main.go:141] libmachine: Decoding PEM data...
	I0404 23:14:30.975370   70163 main.go:141] libmachine: Parsing certificate...
	I0404 23:14:30.975419   70163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem
	I0404 23:14:30.975446   70163 main.go:141] libmachine: Decoding PEM data...
	I0404 23:14:30.975460   70163 main.go:141] libmachine: Parsing certificate...
	I0404 23:14:30.975475   70163 main.go:141] libmachine: Running pre-create checks...
	I0404 23:14:30.975490   70163 main.go:141] libmachine: (newest-cni-037368) Calling .PreCreateCheck
	I0404 23:14:30.975793   70163 main.go:141] libmachine: (newest-cni-037368) Calling .GetConfigRaw
	I0404 23:14:30.976276   70163 main.go:141] libmachine: Creating machine...
	I0404 23:14:30.976292   70163 main.go:141] libmachine: (newest-cni-037368) Calling .Create
	I0404 23:14:30.976409   70163 main.go:141] libmachine: (newest-cni-037368) Creating KVM machine...
	I0404 23:14:30.977746   70163 main.go:141] libmachine: (newest-cni-037368) DBG | found existing default KVM network
	I0404 23:14:30.979255   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:30.979102   70186 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012dfa0}
	I0404 23:14:30.979302   70163 main.go:141] libmachine: (newest-cni-037368) DBG | created network xml: 
	I0404 23:14:30.979317   70163 main.go:141] libmachine: (newest-cni-037368) DBG | <network>
	I0404 23:14:30.979325   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   <name>mk-newest-cni-037368</name>
	I0404 23:14:30.979348   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   <dns enable='no'/>
	I0404 23:14:30.979357   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   
	I0404 23:14:30.979371   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0404 23:14:30.979383   70163 main.go:141] libmachine: (newest-cni-037368) DBG |     <dhcp>
	I0404 23:14:30.979406   70163 main.go:141] libmachine: (newest-cni-037368) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0404 23:14:30.979416   70163 main.go:141] libmachine: (newest-cni-037368) DBG |     </dhcp>
	I0404 23:14:30.979430   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   </ip>
	I0404 23:14:30.979444   70163 main.go:141] libmachine: (newest-cni-037368) DBG |   
	I0404 23:14:30.979458   70163 main.go:141] libmachine: (newest-cni-037368) DBG | </network>
	I0404 23:14:30.979468   70163 main.go:141] libmachine: (newest-cni-037368) DBG | 
	I0404 23:14:30.985185   70163 main.go:141] libmachine: (newest-cni-037368) DBG | trying to create private KVM network mk-newest-cni-037368 192.168.39.0/24...
	I0404 23:14:31.060874   70163 main.go:141] libmachine: (newest-cni-037368) DBG | private KVM network mk-newest-cni-037368 192.168.39.0/24 created
	I0404 23:14:31.060910   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:31.060790   70186 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 23:14:31.060922   70163 main.go:141] libmachine: (newest-cni-037368) Setting up store path in /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368 ...
	I0404 23:14:31.060971   70163 main.go:141] libmachine: (newest-cni-037368) Building disk image from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 23:14:31.061011   70163 main.go:141] libmachine: (newest-cni-037368) Downloading /home/jenkins/minikube-integration/16143-5297/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0404 23:14:31.285537   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:31.285381   70186 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/id_rsa...
	I0404 23:14:31.524567   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:31.524404   70186 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/newest-cni-037368.rawdisk...
	I0404 23:14:31.524597   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Writing magic tar header
	I0404 23:14:31.524615   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Writing SSH key tar header
	I0404 23:14:31.524625   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:31.524541   70186 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368 ...
	I0404 23:14:31.524685   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368
	I0404 23:14:31.524714   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube/machines
	I0404 23:14:31.524730   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 23:14:31.524747   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368 (perms=drwx------)
	I0404 23:14:31.524761   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16143-5297
	I0404 23:14:31.524775   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0404 23:14:31.524788   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home/jenkins
	I0404 23:14:31.524803   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Checking permissions on dir: /home
	I0404 23:14:31.524814   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube/machines (perms=drwxr-xr-x)
	I0404 23:14:31.524823   70163 main.go:141] libmachine: (newest-cni-037368) DBG | Skipping /home - not owner
	I0404 23:14:31.524838   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297/.minikube (perms=drwxr-xr-x)
	I0404 23:14:31.524857   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins/minikube-integration/16143-5297 (perms=drwxrwxr-x)
	I0404 23:14:31.524878   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0404 23:14:31.524893   70163 main.go:141] libmachine: (newest-cni-037368) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0404 23:14:31.524909   70163 main.go:141] libmachine: (newest-cni-037368) Creating domain...
	I0404 23:14:31.526135   70163 main.go:141] libmachine: (newest-cni-037368) define libvirt domain using xml: 
	I0404 23:14:31.526169   70163 main.go:141] libmachine: (newest-cni-037368) <domain type='kvm'>
	I0404 23:14:31.526176   70163 main.go:141] libmachine: (newest-cni-037368)   <name>newest-cni-037368</name>
	I0404 23:14:31.526181   70163 main.go:141] libmachine: (newest-cni-037368)   <memory unit='MiB'>2200</memory>
	I0404 23:14:31.526187   70163 main.go:141] libmachine: (newest-cni-037368)   <vcpu>2</vcpu>
	I0404 23:14:31.526199   70163 main.go:141] libmachine: (newest-cni-037368)   <features>
	I0404 23:14:31.526207   70163 main.go:141] libmachine: (newest-cni-037368)     <acpi/>
	I0404 23:14:31.526215   70163 main.go:141] libmachine: (newest-cni-037368)     <apic/>
	I0404 23:14:31.526249   70163 main.go:141] libmachine: (newest-cni-037368)     <pae/>
	I0404 23:14:31.526275   70163 main.go:141] libmachine: (newest-cni-037368)     
	I0404 23:14:31.526294   70163 main.go:141] libmachine: (newest-cni-037368)   </features>
	I0404 23:14:31.526310   70163 main.go:141] libmachine: (newest-cni-037368)   <cpu mode='host-passthrough'>
	I0404 23:14:31.526335   70163 main.go:141] libmachine: (newest-cni-037368)   
	I0404 23:14:31.526347   70163 main.go:141] libmachine: (newest-cni-037368)   </cpu>
	I0404 23:14:31.526356   70163 main.go:141] libmachine: (newest-cni-037368)   <os>
	I0404 23:14:31.526363   70163 main.go:141] libmachine: (newest-cni-037368)     <type>hvm</type>
	I0404 23:14:31.526386   70163 main.go:141] libmachine: (newest-cni-037368)     <boot dev='cdrom'/>
	I0404 23:14:31.526401   70163 main.go:141] libmachine: (newest-cni-037368)     <boot dev='hd'/>
	I0404 23:14:31.526414   70163 main.go:141] libmachine: (newest-cni-037368)     <bootmenu enable='no'/>
	I0404 23:14:31.526421   70163 main.go:141] libmachine: (newest-cni-037368)   </os>
	I0404 23:14:31.526433   70163 main.go:141] libmachine: (newest-cni-037368)   <devices>
	I0404 23:14:31.526443   70163 main.go:141] libmachine: (newest-cni-037368)     <disk type='file' device='cdrom'>
	I0404 23:14:31.526459   70163 main.go:141] libmachine: (newest-cni-037368)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/boot2docker.iso'/>
	I0404 23:14:31.526468   70163 main.go:141] libmachine: (newest-cni-037368)       <target dev='hdc' bus='scsi'/>
	I0404 23:14:31.526489   70163 main.go:141] libmachine: (newest-cni-037368)       <readonly/>
	I0404 23:14:31.526510   70163 main.go:141] libmachine: (newest-cni-037368)     </disk>
	I0404 23:14:31.526520   70163 main.go:141] libmachine: (newest-cni-037368)     <disk type='file' device='disk'>
	I0404 23:14:31.526537   70163 main.go:141] libmachine: (newest-cni-037368)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0404 23:14:31.526556   70163 main.go:141] libmachine: (newest-cni-037368)       <source file='/home/jenkins/minikube-integration/16143-5297/.minikube/machines/newest-cni-037368/newest-cni-037368.rawdisk'/>
	I0404 23:14:31.526568   70163 main.go:141] libmachine: (newest-cni-037368)       <target dev='hda' bus='virtio'/>
	I0404 23:14:31.526578   70163 main.go:141] libmachine: (newest-cni-037368)     </disk>
	I0404 23:14:31.526591   70163 main.go:141] libmachine: (newest-cni-037368)     <interface type='network'>
	I0404 23:14:31.526605   70163 main.go:141] libmachine: (newest-cni-037368)       <source network='mk-newest-cni-037368'/>
	I0404 23:14:31.526621   70163 main.go:141] libmachine: (newest-cni-037368)       <model type='virtio'/>
	I0404 23:14:31.526634   70163 main.go:141] libmachine: (newest-cni-037368)     </interface>
	I0404 23:14:31.526655   70163 main.go:141] libmachine: (newest-cni-037368)     <interface type='network'>
	I0404 23:14:31.526669   70163 main.go:141] libmachine: (newest-cni-037368)       <source network='default'/>
	I0404 23:14:31.526690   70163 main.go:141] libmachine: (newest-cni-037368)       <model type='virtio'/>
	I0404 23:14:31.526706   70163 main.go:141] libmachine: (newest-cni-037368)     </interface>
	I0404 23:14:31.526720   70163 main.go:141] libmachine: (newest-cni-037368)     <serial type='pty'>
	I0404 23:14:31.526729   70163 main.go:141] libmachine: (newest-cni-037368)       <target port='0'/>
	I0404 23:14:31.526735   70163 main.go:141] libmachine: (newest-cni-037368)     </serial>
	I0404 23:14:31.526742   70163 main.go:141] libmachine: (newest-cni-037368)     <console type='pty'>
	I0404 23:14:31.526747   70163 main.go:141] libmachine: (newest-cni-037368)       <target type='serial' port='0'/>
	I0404 23:14:31.526751   70163 main.go:141] libmachine: (newest-cni-037368)     </console>
	I0404 23:14:31.526757   70163 main.go:141] libmachine: (newest-cni-037368)     <rng model='virtio'>
	I0404 23:14:31.526763   70163 main.go:141] libmachine: (newest-cni-037368)       <backend model='random'>/dev/random</backend>
	I0404 23:14:31.526771   70163 main.go:141] libmachine: (newest-cni-037368)     </rng>
	I0404 23:14:31.526777   70163 main.go:141] libmachine: (newest-cni-037368)     
	I0404 23:14:31.526793   70163 main.go:141] libmachine: (newest-cni-037368)     
	I0404 23:14:31.526806   70163 main.go:141] libmachine: (newest-cni-037368)   </devices>
	I0404 23:14:31.526815   70163 main.go:141] libmachine: (newest-cni-037368) </domain>
	I0404 23:14:31.526821   70163 main.go:141] libmachine: (newest-cni-037368) 
	I0404 23:14:31.530977   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:23:3c:1f in network default
	I0404 23:14:31.531540   70163 main.go:141] libmachine: (newest-cni-037368) Ensuring networks are active...
	I0404 23:14:31.531562   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:31.532384   70163 main.go:141] libmachine: (newest-cni-037368) Ensuring network default is active
	I0404 23:14:31.532762   70163 main.go:141] libmachine: (newest-cni-037368) Ensuring network mk-newest-cni-037368 is active
	I0404 23:14:31.533579   70163 main.go:141] libmachine: (newest-cni-037368) Getting domain xml...
	I0404 23:14:31.534419   70163 main.go:141] libmachine: (newest-cni-037368) Creating domain...
	I0404 23:14:32.809020   70163 main.go:141] libmachine: (newest-cni-037368) Waiting to get IP...
	I0404 23:14:32.810157   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:32.810614   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:32.810645   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:32.810575   70186 retry.go:31] will retry after 210.655771ms: waiting for machine to come up
	I0404 23:14:33.023279   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:33.023794   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:33.023820   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:33.023743   70186 retry.go:31] will retry after 260.218627ms: waiting for machine to come up
	I0404 23:14:33.285561   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:33.286096   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:33.286135   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:33.286054   70186 retry.go:31] will retry after 463.837334ms: waiting for machine to come up
	I0404 23:14:33.751872   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:33.752323   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:33.752355   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:33.752269   70186 retry.go:31] will retry after 449.398418ms: waiting for machine to come up
	I0404 23:14:34.202847   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:34.203383   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:34.203411   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:34.203335   70186 retry.go:31] will retry after 648.432709ms: waiting for machine to come up
	I0404 23:14:34.853129   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:34.853654   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:34.853686   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:34.853591   70186 retry.go:31] will retry after 646.996164ms: waiting for machine to come up
	I0404 23:14:35.502350   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:35.502916   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:35.502961   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:35.502870   70186 retry.go:31] will retry after 756.555637ms: waiting for machine to come up
	I0404 23:14:36.261479   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:36.261922   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:36.261948   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:36.261875   70186 retry.go:31] will retry after 1.321472833s: waiting for machine to come up
	I0404 23:14:37.585353   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:37.585877   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:37.585908   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:37.585782   70186 retry.go:31] will retry after 1.172339634s: waiting for machine to come up
	I0404 23:14:38.760166   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:38.760836   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:38.760870   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:38.760738   70186 retry.go:31] will retry after 2.322289196s: waiting for machine to come up
	I0404 23:14:41.084284   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:41.084881   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:41.084910   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:41.084812   70186 retry.go:31] will retry after 1.965671087s: waiting for machine to come up
	I0404 23:14:43.052166   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:43.052749   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:43.052783   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:43.052681   70186 retry.go:31] will retry after 2.736599487s: waiting for machine to come up
	I0404 23:14:45.790853   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:45.791272   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:45.791299   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:45.791226   70186 retry.go:31] will retry after 3.579403426s: waiting for machine to come up
	I0404 23:14:49.372474   70163 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:14:49.373011   70163 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:14:49.373035   70163 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:14:49.372958   70186 retry.go:31] will retry after 5.429595005s: waiting for machine to come up
	
	
	==> CRI-O <==
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.381975595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4d70154-b72f-4a09-b241-41981ef747a8 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.383551921Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0df690a2-e4bb-4729-81eb-d6a062be3edf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.383913010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272491383892569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0df690a2-e4bb-4729-81eb-d6a062be3edf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.384576987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ece114b-56a3-408c-a752-304bd44e5183 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.384653206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ece114b-56a3-408c-a752-304bd44e5183 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.385004041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271355127234931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd77b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882004c8da33ff78ce32e21053270a83a295742d2c7c682bfd31d06c437fe1d5,PodSandboxId:37c98ea2e0c35a5c2510f9171c8b2fe4191be347df5671a38aee7c53f97c0820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271333611660421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d670143e-2580-40d9-a69c-b7623a37e199,},Annotations:map[string]string{io.kubernetes.container.hash: 873f5140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff,PodSandboxId:1a4d3439c1ebd0f1f9e8f108972b338cffee7828aba2fdff494ca25aa86f75fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271329898220433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ede65fe-7ab4-443f-8cae-a6ea4cd27985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a72cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451,PodSandboxId:023bdf3b3dea5ff985056149d2447a9d2543e48cf84a6d7aedfc89065586d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712271323423859564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmx89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d643ba1-44fb-4783-8d
5b-df8a4c0f29fa,},Annotations:map[string]string{io.kubernetes.container.hash: d364d278,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271323424175554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd7
7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915,PodSandboxId:896ee236d9b00ec75030b73a5e577f85352e6eaa7786a8000ef5998737894a4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712271317752355395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaa5131fe0a844be66fa13bbad7df76,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513,PodSandboxId:b8d66dc2b6cd6d2568e072b2cba5481d34627f9419dd835473e921d22d6daa64,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271317635453787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfef7834e5d0faa44021f99a114c5487,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 76ecd6c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904,PodSandboxId:d37b6c573061cefcb42c25719234bb9bcb57b73c3da6c4a2b91ea175461ef019,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712271317626857011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36e0205c12bb19768a121c4e55c508b,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38,PodSandboxId:5f39b041e61d15d81780fb2bf769f90d89b1d8f3216031648b97d3cad3524926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712271317550844313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ca7ca6192bfab1dd2064d90a78723,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f34bad78,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ece114b-56a3-408c-a752-304bd44e5183 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.427857887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d32dfd8-e601-4c2c-b09f-7d3441937de9 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.427960170Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d32dfd8-e601-4c2c-b09f-7d3441937de9 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.436397070Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e489e03-8123-4176-982d-7cb52df365dd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.436824631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272491436795938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e489e03-8123-4176-982d-7cb52df365dd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.437341124Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e5fac7d-5305-49f0-ad6b-b0f9f4cd287b name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.437426320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e5fac7d-5305-49f0-ad6b-b0f9f4cd287b name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.438576112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271355127234931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd77b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882004c8da33ff78ce32e21053270a83a295742d2c7c682bfd31d06c437fe1d5,PodSandboxId:37c98ea2e0c35a5c2510f9171c8b2fe4191be347df5671a38aee7c53f97c0820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271333611660421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d670143e-2580-40d9-a69c-b7623a37e199,},Annotations:map[string]string{io.kubernetes.container.hash: 873f5140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff,PodSandboxId:1a4d3439c1ebd0f1f9e8f108972b338cffee7828aba2fdff494ca25aa86f75fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271329898220433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ede65fe-7ab4-443f-8cae-a6ea4cd27985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a72cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451,PodSandboxId:023bdf3b3dea5ff985056149d2447a9d2543e48cf84a6d7aedfc89065586d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712271323423859564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmx89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d643ba1-44fb-4783-8d
5b-df8a4c0f29fa,},Annotations:map[string]string{io.kubernetes.container.hash: d364d278,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271323424175554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd7
7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915,PodSandboxId:896ee236d9b00ec75030b73a5e577f85352e6eaa7786a8000ef5998737894a4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712271317752355395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaa5131fe0a844be66fa13bbad7df76,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513,PodSandboxId:b8d66dc2b6cd6d2568e072b2cba5481d34627f9419dd835473e921d22d6daa64,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271317635453787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfef7834e5d0faa44021f99a114c5487,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 76ecd6c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904,PodSandboxId:d37b6c573061cefcb42c25719234bb9bcb57b73c3da6c4a2b91ea175461ef019,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712271317626857011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36e0205c12bb19768a121c4e55c508b,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38,PodSandboxId:5f39b041e61d15d81780fb2bf769f90d89b1d8f3216031648b97d3cad3524926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712271317550844313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ca7ca6192bfab1dd2064d90a78723,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f34bad78,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e5fac7d-5305-49f0-ad6b-b0f9f4cd287b name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.484192183Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecb88654-af74-4f5e-9792-26678e1fcc87 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.484323076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecb88654-af74-4f5e-9792-26678e1fcc87 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.485552262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6889f141-ad65-4b97-a12f-099bca252a6f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.485937844Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272491485913646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6889f141-ad65-4b97-a12f-099bca252a6f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.486592114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c3e2b3b-2ea2-4ffb-86e9-d801bbd1c779 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.486674602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c3e2b3b-2ea2-4ffb-86e9-d801bbd1c779 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.486871470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271355127234931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd77b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882004c8da33ff78ce32e21053270a83a295742d2c7c682bfd31d06c437fe1d5,PodSandboxId:37c98ea2e0c35a5c2510f9171c8b2fe4191be347df5671a38aee7c53f97c0820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271333611660421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d670143e-2580-40d9-a69c-b7623a37e199,},Annotations:map[string]string{io.kubernetes.container.hash: 873f5140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff,PodSandboxId:1a4d3439c1ebd0f1f9e8f108972b338cffee7828aba2fdff494ca25aa86f75fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271329898220433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ede65fe-7ab4-443f-8cae-a6ea4cd27985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a72cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451,PodSandboxId:023bdf3b3dea5ff985056149d2447a9d2543e48cf84a6d7aedfc89065586d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712271323423859564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmx89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d643ba1-44fb-4783-8d
5b-df8a4c0f29fa,},Annotations:map[string]string{io.kubernetes.container.hash: d364d278,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271323424175554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd7
7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915,PodSandboxId:896ee236d9b00ec75030b73a5e577f85352e6eaa7786a8000ef5998737894a4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712271317752355395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaa5131fe0a844be66fa13bbad7df76,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513,PodSandboxId:b8d66dc2b6cd6d2568e072b2cba5481d34627f9419dd835473e921d22d6daa64,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271317635453787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfef7834e5d0faa44021f99a114c5487,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 76ecd6c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904,PodSandboxId:d37b6c573061cefcb42c25719234bb9bcb57b73c3da6c4a2b91ea175461ef019,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712271317626857011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36e0205c12bb19768a121c4e55c508b,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38,PodSandboxId:5f39b041e61d15d81780fb2bf769f90d89b1d8f3216031648b97d3cad3524926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712271317550844313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ca7ca6192bfab1dd2064d90a78723,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f34bad78,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c3e2b3b-2ea2-4ffb-86e9-d801bbd1c779 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.515697097Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=9dec32d9-8c90-47b1-83c7-68a28470d2d4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.516484015Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:37c98ea2e0c35a5c2510f9171c8b2fe4191be347df5671a38aee7c53f97c0820,Metadata:&PodSandboxMetadata{Name:busybox,Uid:d670143e-2580-40d9-a69c-b7623a37e199,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712271331169708862,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d670143e-2580-40d9-a69c-b7623a37e199,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-04T22:55:21.822552989Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1a4d3439c1ebd0f1f9e8f108972b338cffee7828aba2fdff494ca25aa86f75fa,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wr424,Uid:3ede65fe-7ab4-443f-8cae-a6ea4cd27985,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17122713296714494
19,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ede65fe-7ab4-443f-8cae-a6ea4cd27985,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-04T22:55:21.822543391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3fd3b8000b0c0f30396188a0e9db4b248869b48a4448ee6e1d3b8f58644b968c,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-5q4ff,Uid:206d3fa3-2f7f-4852-860b-d9f00c868894,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712271327913145876,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-5q4ff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 206d3fa3-2f7f-4852-860b-d9f00c868894,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-04T22:55:21.8
22550982Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:023bdf3b3dea5ff985056149d2447a9d2543e48cf84a6d7aedfc89065586d27a,Metadata:&PodSandboxMetadata{Name:kube-proxy-zmx89,Uid:2d643ba1-44fb-4783-8d5b-df8a4c0f29fa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712271323053209274,Labels:map[string]string{controller-revision-hash: 97c89d47,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zmx89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d643ba1-44fb-4783-8d5b-df8a4c0f29fa,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-04T22:55:21.822547254Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b0555d8c-489e-4265-9930-c8f4424cd77b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712271323040119604,Labels:map[string]st
ring{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd77b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/c
onfig.seen: 2024-04-04T22:55:21.822535786Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5f39b041e61d15d81780fb2bf769f90d89b1d8f3216031648b97d3cad3524926,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-024416,Uid:cb2ca7ca6192bfab1dd2064d90a78723,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712271317321317206,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ca7ca6192bfab1dd2064d90a78723,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.77:8443,kubernetes.io/config.hash: cb2ca7ca6192bfab1dd2064d90a78723,kubernetes.io/config.seen: 2024-04-04T22:55:16.798752080Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d37b6c573061cefcb42c25719234bb9bcb57b73c3da6c4a2b91ea175461ef019,Metadata:&PodSandboxMetadata{Name
:kube-controller-manager-no-preload-024416,Uid:b36e0205c12bb19768a121c4e55c508b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712271317302642454,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36e0205c12bb19768a121c4e55c508b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b36e0205c12bb19768a121c4e55c508b,kubernetes.io/config.seen: 2024-04-04T22:55:16.798753483Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b8d66dc2b6cd6d2568e072b2cba5481d34627f9419dd835473e921d22d6daa64,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-024416,Uid:cfef7834e5d0faa44021f99a114c5487,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712271317298673073,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-024416
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfef7834e5d0faa44021f99a114c5487,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.77:2379,kubernetes.io/config.hash: cfef7834e5d0faa44021f99a114c5487,kubernetes.io/config.seen: 2024-04-04T22:55:16.798745396Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:896ee236d9b00ec75030b73a5e577f85352e6eaa7786a8000ef5998737894a4d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-024416,Uid:eeaa5131fe0a844be66fa13bbad7df76,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712271317284886602,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaa5131fe0a844be66fa13bbad7df76,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: eeaa5131fe0a844be66fa13bbad7df76,kubern
etes.io/config.seen: 2024-04-04T22:55:16.798755313Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9dec32d9-8c90-47b1-83c7-68a28470d2d4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.517122454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15f173e6-469a-4ab7-8694-f79cb17a2abe name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.517178021Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15f173e6-469a-4ab7-8694-f79cb17a2abe name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:51 no-preload-024416 crio[723]: time="2024-04-04 23:14:51.517388175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271355127234931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd77b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882004c8da33ff78ce32e21053270a83a295742d2c7c682bfd31d06c437fe1d5,PodSandboxId:37c98ea2e0c35a5c2510f9171c8b2fe4191be347df5671a38aee7c53f97c0820,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712271333611660421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d670143e-2580-40d9-a69c-b7623a37e199,},Annotations:map[string]string{io.kubernetes.container.hash: 873f5140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff,PodSandboxId:1a4d3439c1ebd0f1f9e8f108972b338cffee7828aba2fdff494ca25aa86f75fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271329898220433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ede65fe-7ab4-443f-8cae-a6ea4cd27985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a72cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451,PodSandboxId:023bdf3b3dea5ff985056149d2447a9d2543e48cf84a6d7aedfc89065586d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712271323423859564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmx89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d643ba1-44fb-4783-8d
5b-df8a4c0f29fa,},Annotations:map[string]string{io.kubernetes.container.hash: d364d278,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889,PodSandboxId:bc81d5b907b24f20286ef04e6b7c30542d56689a7a678b32a2bc0d75a9d5b3d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712271323424175554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0555d8c-489e-4265-9930-c8f4424cd7
7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6715b9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915,PodSandboxId:896ee236d9b00ec75030b73a5e577f85352e6eaa7786a8000ef5998737894a4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712271317752355395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaa5131fe0a844be66fa13bbad7df76,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513,PodSandboxId:b8d66dc2b6cd6d2568e072b2cba5481d34627f9419dd835473e921d22d6daa64,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271317635453787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfef7834e5d0faa44021f99a114c5487,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 76ecd6c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904,PodSandboxId:d37b6c573061cefcb42c25719234bb9bcb57b73c3da6c4a2b91ea175461ef019,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712271317626857011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36e0205c12bb19768a121c4e55c508b,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38,PodSandboxId:5f39b041e61d15d81780fb2bf769f90d89b1d8f3216031648b97d3cad3524926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712271317550844313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-024416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ca7ca6192bfab1dd2064d90a78723,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f34bad78,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15f173e6-469a-4ab7-8694-f79cb17a2abe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11c58a1830991       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   bc81d5b907b24       storage-provisioner
	882004c8da33f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   37c98ea2e0c35       busybox
	b193f00fa4600       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   1a4d3439c1ebd       coredns-7db6d8ff4d-wr424
	608d21b5e121f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   bc81d5b907b24       storage-provisioner
	fb4517a71e257       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652                                      19 minutes ago      Running             kube-proxy                1                   023bdf3b3dea5       kube-proxy-zmx89
	d3b7424b0efb3       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5                                      19 minutes ago      Running             kube-scheduler            1                   896ee236d9b00       kube-scheduler-no-preload-024416
	edeb6b8feb7b1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago      Running             etcd                      1                   b8d66dc2b6cd6       etcd-no-preload-024416
	06183daed52cd       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a                                      19 minutes ago      Running             kube-controller-manager   1                   d37b6c573061c       kube-controller-manager-no-preload-024416
	ecfe112abbd47       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3                                      19 minutes ago      Running             kube-apiserver            1                   5f39b041e61d1       kube-apiserver-no-preload-024416
	
	
	==> coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35245 - 52393 "HINFO IN 7345092753685362976.4093367830504548005. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009638879s
	
	
	==> describe nodes <==
	Name:               no-preload-024416
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-024416
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=no-preload-024416
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T22_46_54_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 22:46:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-024416
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 23:14:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 23:11:10 +0000   Thu, 04 Apr 2024 22:46:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 23:11:10 +0000   Thu, 04 Apr 2024 22:46:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 23:11:10 +0000   Thu, 04 Apr 2024 22:46:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 23:11:10 +0000   Thu, 04 Apr 2024 22:55:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.77
	  Hostname:    no-preload-024416
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e91563183734070bb442c9a633fdfac
	  System UUID:                4e915631-8373-4070-bb44-2c9a633fdfac
	  Boot ID:                    86452d26-49f4-4443-9a9a-946a4639d8db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.0
	  Kube-Proxy Version:         v1.30.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-7db6d8ff4d-wr424                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-no-preload-024416                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kube-apiserver-no-preload-024416             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-no-preload-024416    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-zmx89                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-no-preload-024416             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 metrics-server-569cc877fc-5q4ff              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-024416 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-024416 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-024416 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node no-preload-024416 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node no-preload-024416 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node no-preload-024416 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                27m                kubelet          Node no-preload-024416 status is now: NodeReady
	  Normal  RegisteredNode           27m                node-controller  Node no-preload-024416 event: Registered Node no-preload-024416 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-024416 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-024416 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-024416 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-024416 event: Registered Node no-preload-024416 in Controller
	
	
	==> dmesg <==
	[Apr 4 22:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054415] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045527] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.644506] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.835887] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.687223] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.533607] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.057321] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067447] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.191649] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.173953] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.403466] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[Apr 4 22:55] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.064927] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.564858] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +6.615700] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.463062] systemd-fstab-generator[1975]: Ignoring "noauto" option for root device
	[  +1.593430] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.509957] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] <==
	{"level":"warn","ts":"2024-04-04T22:55:43.147018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.218599ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" ","response":"range_response_count:1 size:4280"}
	{"level":"info","ts":"2024-04-04T22:55:43.147141Z","caller":"traceutil/trace.go:171","msg":"trace[1578574406] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff; range_end:; response_count:1; response_revision:605; }","duration":"281.370385ms","start":"2024-04-04T22:55:42.865759Z","end":"2024-04-04T22:55:43.147129Z","steps":["trace[1578574406] 'agreement among raft nodes before linearized reading'  (duration: 281.079729ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.147292Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.349627635s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" ","response":"range_response_count:1 size:4280"}
	{"level":"info","ts":"2024-04-04T22:55:43.147344Z","caller":"traceutil/trace.go:171","msg":"trace[348841512] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff; range_end:; response_count:1; response_revision:605; }","duration":"1.349703059s","start":"2024-04-04T22:55:41.797633Z","end":"2024-04-04T22:55:43.147336Z","steps":["trace[348841512] 'agreement among raft nodes before linearized reading'  (duration: 1.349604297s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.147369Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:41.797619Z","time spent":"1.34974223s","remote":"127.0.0.1:51180","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4304,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" "}
	{"level":"warn","ts":"2024-04-04T22:55:43.147446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.54968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-5q4ff.17c335b374c84bf1\" ","response":"range_response_count:1 size:817"}
	{"level":"info","ts":"2024-04-04T22:55:43.147495Z","caller":"traceutil/trace.go:171","msg":"trace[437692337] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-5q4ff.17c335b374c84bf1; range_end:; response_count:1; response_revision:605; }","duration":"280.620438ms","start":"2024-04-04T22:55:42.866867Z","end":"2024-04-04T22:55:43.147488Z","steps":["trace[437692337] 'agreement among raft nodes before linearized reading'  (duration: 280.43041ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T22:55:43.601417Z","caller":"traceutil/trace.go:171","msg":"trace[714666392] linearizableReadLoop","detail":"{readStateIndex:643; appliedIndex:642; }","duration":"371.142343ms","start":"2024-04-04T22:55:43.230255Z","end":"2024-04-04T22:55:43.601397Z","steps":["trace[714666392] 'read index received'  (duration: 369.676254ms)","trace[714666392] 'applied index is now lower than readState.Index'  (duration: 1.464874ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-04T22:55:43.601761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"371.477238ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-5q4ff.17c335b375b6d11e\" ","response":"range_response_count:1 size:940"}
	{"level":"info","ts":"2024-04-04T22:55:43.60233Z","caller":"traceutil/trace.go:171","msg":"trace[52618212] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-5q4ff.17c335b375b6d11e; range_end:; response_count:1; response_revision:607; }","duration":"372.107455ms","start":"2024-04-04T22:55:43.230208Z","end":"2024-04-04T22:55:43.602315Z","steps":["trace[52618212] 'agreement among raft nodes before linearized reading'  (duration: 371.402308ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.602409Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:43.230196Z","time spent":"372.197163ms","remote":"127.0.0.1:51050","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":964,"request content":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-5q4ff.17c335b375b6d11e\" "}
	{"level":"warn","ts":"2024-04-04T22:55:43.601829Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"371.322428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-04-04T22:55:43.60263Z","caller":"traceutil/trace.go:171","msg":"trace[46313904] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff; range_end:; response_count:1; response_revision:607; }","duration":"372.138829ms","start":"2024-04-04T22:55:43.230478Z","end":"2024-04-04T22:55:43.602617Z","steps":["trace[46313904] 'agreement among raft nodes before linearized reading'  (duration: 371.305566ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.602682Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:43.230473Z","time spent":"372.19806ms","remote":"127.0.0.1:51180","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4260,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" "}
	{"level":"warn","ts":"2024-04-04T22:55:43.601923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.631656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-04T22:55:43.602863Z","caller":"traceutil/trace.go:171","msg":"trace[1549639602] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:607; }","duration":"352.614577ms","start":"2024-04-04T22:55:43.250239Z","end":"2024-04-04T22:55:43.602853Z","steps":["trace[1549639602] 'agreement among raft nodes before linearized reading'  (duration: 351.664519ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.602892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:43.250226Z","time spent":"352.657151ms","remote":"127.0.0.1:50952","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-04-04T22:55:43.601999Z","caller":"traceutil/trace.go:171","msg":"trace[341973304] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"446.329738ms","start":"2024-04-04T22:55:43.155618Z","end":"2024-04-04T22:55:43.601948Z","steps":["trace[341973304] 'process raft request'  (duration: 444.370786ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T22:55:43.603214Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T22:55:43.155607Z","time spent":"447.553664ms","remote":"127.0.0.1:51180","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4221,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" mod_revision:574 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" value_size:4155 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5q4ff\" > >"}
	{"level":"info","ts":"2024-04-04T23:05:19.554681Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":845}
	{"level":"info","ts":"2024-04-04T23:05:19.57241Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":845,"took":"16.472745ms","hash":833007506,"current-db-size-bytes":2609152,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2609152,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-04-04T23:05:19.572534Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":833007506,"revision":845,"compact-revision":-1}
	{"level":"info","ts":"2024-04-04T23:10:19.564219Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1087}
	{"level":"info","ts":"2024-04-04T23:10:19.569227Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1087,"took":"4.067862ms","hash":530557295,"current-db-size-bytes":2609152,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1622016,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-04T23:10:19.569324Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":530557295,"revision":1087,"compact-revision":845}
	
	
	==> kernel <==
	 23:14:51 up 20 min,  0 users,  load average: 0.04, 0.12, 0.13
	Linux no-preload-024416 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] <==
	I0404 23:08:22.381112       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:10:21.383315       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:10:21.383440       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0404 23:10:22.384033       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:10:22.384221       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:10:22.384251       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:10:22.384288       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:10:22.384357       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:10:22.385434       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:11:22.384942       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:11:22.385284       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:11:22.385343       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:11:22.386031       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:11:22.386148       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:11:22.387427       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:13:22.385815       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:13:22.385915       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:13:22.385925       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:13:22.388016       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:13:22.388138       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:13:22.388146       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] <==
	I0404 23:09:07.122171       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:09:36.595918       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:09:37.132394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:10:06.600826       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:10:07.142290       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:10:36.605679       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:10:37.150042       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:11:06.610998       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:11:07.157714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0404 23:11:31.880034       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="287.41µs"
	E0404 23:11:36.616326       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:11:37.165164       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0404 23:11:42.881955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="236.177µs"
	E0404 23:12:06.621369       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:12:07.173795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:12:36.627133       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:12:37.181985       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:13:06.632917       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:13:07.192509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:13:36.638477       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:13:37.202713       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:14:06.644724       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:14:07.211831       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:14:36.652710       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:14:37.222296       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] <==
	I0404 22:55:24.416373       1 server_linux.go:69] "Using iptables proxy"
	I0404 22:55:24.463648       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.77"]
	I0404 22:55:24.551436       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0404 22:55:24.551555       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 22:55:24.551588       1 server_linux.go:165] "Using iptables Proxier"
	I0404 22:55:24.554803       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 22:55:24.555520       1 server.go:872] "Version info" version="v1.30.0-rc.0"
	I0404 22:55:24.555572       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:55:24.558269       1 config.go:192] "Starting service config controller"
	I0404 22:55:24.558335       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0404 22:55:24.558379       1 config.go:101] "Starting endpoint slice config controller"
	I0404 22:55:24.558402       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0404 22:55:24.559691       1 config.go:319] "Starting node config controller"
	I0404 22:55:24.562750       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0404 22:55:24.658915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0404 22:55:24.659031       1 shared_informer.go:320] Caches are synced for service config
	I0404 22:55:24.663225       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] <==
	I0404 22:55:19.201712       1 serving.go:380] Generated self-signed cert in-memory
	W0404 22:55:21.282147       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0404 22:55:21.282233       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0404 22:55:21.282264       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0404 22:55:21.282287       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0404 22:55:21.343623       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.0"
	I0404 22:55:21.343709       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 22:55:21.347207       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0404 22:55:21.347263       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0404 22:55:21.348254       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0404 22:55:21.348357       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0404 22:55:21.380226       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0404 22:55:21.380331       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0404 22:55:22.948041       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 04 23:12:16 no-preload-024416 kubelet[1351]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:12:16 no-preload-024416 kubelet[1351]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:12:16 no-preload-024416 kubelet[1351]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:12:22 no-preload-024416 kubelet[1351]: E0404 23:12:22.861712    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:12:36 no-preload-024416 kubelet[1351]: E0404 23:12:36.862099    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:12:47 no-preload-024416 kubelet[1351]: E0404 23:12:47.860900    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:12:59 no-preload-024416 kubelet[1351]: E0404 23:12:59.860954    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:13:12 no-preload-024416 kubelet[1351]: E0404 23:13:12.861390    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:13:16 no-preload-024416 kubelet[1351]: E0404 23:13:16.902593    1351 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 04 23:13:16 no-preload-024416 kubelet[1351]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:13:16 no-preload-024416 kubelet[1351]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:13:16 no-preload-024416 kubelet[1351]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:13:16 no-preload-024416 kubelet[1351]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:13:24 no-preload-024416 kubelet[1351]: E0404 23:13:24.860738    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:13:36 no-preload-024416 kubelet[1351]: E0404 23:13:36.863422    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:13:51 no-preload-024416 kubelet[1351]: E0404 23:13:51.862462    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:14:04 no-preload-024416 kubelet[1351]: E0404 23:14:04.861007    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:14:16 no-preload-024416 kubelet[1351]: E0404 23:14:16.861418    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:14:16 no-preload-024416 kubelet[1351]: E0404 23:14:16.901039    1351 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 04 23:14:16 no-preload-024416 kubelet[1351]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:14:16 no-preload-024416 kubelet[1351]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:14:16 no-preload-024416 kubelet[1351]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:14:16 no-preload-024416 kubelet[1351]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:14:28 no-preload-024416 kubelet[1351]: E0404 23:14:28.861422    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	Apr 04 23:14:42 no-preload-024416 kubelet[1351]: E0404 23:14:42.861233    1351 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5q4ff" podUID="206d3fa3-2f7f-4852-860b-d9f00c868894"
	
	
	==> storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] <==
	I0404 22:55:55.242192       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0404 22:55:55.259233       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0404 22:55:55.259437       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0404 22:56:12.661912       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0404 22:56:12.662159       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-024416_4b0a16fa-d6d8-4348-8727-792f8e1c636c!
	I0404 22:56:12.666334       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d6efdd0-78db-41ac-b46f-f7e4d5ce265a", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-024416_4b0a16fa-d6d8-4348-8727-792f8e1c636c became leader
	I0404 22:56:12.763188       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-024416_4b0a16fa-d6d8-4348-8727-792f8e1c636c!
	
	
	==> storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] <==
	I0404 22:55:24.079479       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0404 22:55:54.088392       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-024416 -n no-preload-024416
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-024416 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5q4ff
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-024416 describe pod metrics-server-569cc877fc-5q4ff
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-024416 describe pod metrics-server-569cc877fc-5q4ff: exit status 1 (67.437562ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5q4ff" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-024416 describe pod metrics-server-569cc877fc-5q4ff: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (358.59s)
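For context on the post-mortem steps above: the harness shells out to kubectl to list pods whose phase is not Running and then describes each one. Below is a minimal client-go sketch of that same check; it is not the test's actual helper code, and the kubeconfig path (the default ~/.kube/config) and the hard-coded field selector are assumptions for illustration only (the harness instead passes --context no-preload-024416 to kubectl).

	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumption: read the default kubeconfig from the home directory.
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Rough equivalent of:
		//   kubectl get po -A --field-selector=status.phase!=Running
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("non-running pod: %s/%s (phase=%s)\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}

A pod that has already been deleted by the time the describe step runs simply no longer appears in such a listing, which is consistent with the NotFound / exit status 1 from kubectl describe shown above.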

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (351.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-04 23:15:51.842039596 +0000 UTC m=+6407.705025304
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-952083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-952083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.449µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-952083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-952083 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-952083 logs -n 25: (1.419439325s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:46 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-952083  | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-143118            | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-024416             | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-343162        | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-952083       | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 23:00 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-143118                 | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-024416                  | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-343162             | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:14 UTC | 04 Apr 24 23:14 UTC |
	| start   | -p newest-cni-037368 --memory=2200 --alsologtostderr   | newest-cni-037368            | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:14 UTC | 04 Apr 24 23:15 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| delete  | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:14 UTC | 04 Apr 24 23:14 UTC |
	| delete  | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:15 UTC | 04 Apr 24 23:15 UTC |
	| addons  | enable metrics-server -p newest-cni-037368             | newest-cni-037368            | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:15 UTC | 04 Apr 24 23:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-037368                                   | newest-cni-037368            | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:15 UTC | 04 Apr 24 23:15 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-037368                  | newest-cni-037368            | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:15 UTC | 04 Apr 24 23:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-037368 --memory=2200 --alsologtostderr   | newest-cni-037368            | jenkins | v1.33.0-beta.0 | 04 Apr 24 23:15 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 23:15:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 23:15:41.192617   71049 out.go:291] Setting OutFile to fd 1 ...
	I0404 23:15:41.192875   71049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 23:15:41.192883   71049 out.go:304] Setting ErrFile to fd 2...
	I0404 23:15:41.192894   71049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 23:15:41.193108   71049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 23:15:41.193638   71049 out.go:298] Setting JSON to false
	I0404 23:15:41.194533   71049 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7087,"bootTime":1712265455,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 23:15:41.194591   71049 start.go:139] virtualization: kvm guest
	I0404 23:15:41.197157   71049 out.go:177] * [newest-cni-037368] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 23:15:41.198818   71049 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 23:15:41.200258   71049 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 23:15:41.198875   71049 notify.go:220] Checking for updates...
	I0404 23:15:41.203131   71049 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 23:15:41.204454   71049 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 23:15:41.205755   71049 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 23:15:41.206998   71049 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 23:15:41.208599   71049 config.go:182] Loaded profile config "newest-cni-037368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 23:15:41.209041   71049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:15:41.209101   71049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:15:41.224387   71049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I0404 23:15:41.224742   71049 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:15:41.225293   71049 main.go:141] libmachine: Using API Version  1
	I0404 23:15:41.225313   71049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:15:41.225653   71049 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:15:41.225822   71049 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:15:41.226067   71049 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 23:15:41.226351   71049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:15:41.226402   71049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:15:41.241327   71049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0404 23:15:41.241726   71049 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:15:41.242157   71049 main.go:141] libmachine: Using API Version  1
	I0404 23:15:41.242178   71049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:15:41.242557   71049 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:15:41.242726   71049 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:15:41.277494   71049 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 23:15:41.278748   71049 start.go:297] selected driver: kvm2
	I0404 23:15:41.278759   71049 start.go:901] validating driver "kvm2" against &{Name:newest-cni-037368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0-rc.0 ClusterName:newest-cni-037368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods
:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 23:15:41.278934   71049 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 23:15:41.279630   71049 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 23:15:41.279720   71049 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 23:15:41.294654   71049 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 23:15:41.294996   71049 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0404 23:15:41.295061   71049 cni.go:84] Creating CNI manager for ""
	I0404 23:15:41.295075   71049 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 23:15:41.295113   71049 start.go:340] cluster config:
	{Name:newest-cni-037368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-037368 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress
: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 23:15:41.295211   71049 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 23:15:41.297211   71049 out.go:177] * Starting "newest-cni-037368" primary control-plane node in "newest-cni-037368" cluster
	I0404 23:15:41.298544   71049 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 23:15:41.298576   71049 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0404 23:15:41.298587   71049 cache.go:56] Caching tarball of preloaded images
	I0404 23:15:41.298660   71049 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 23:15:41.298671   71049 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on crio
	I0404 23:15:41.298764   71049 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/newest-cni-037368/config.json ...
	I0404 23:15:41.298938   71049 start.go:360] acquireMachinesLock for newest-cni-037368: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 23:15:41.298976   71049 start.go:364] duration metric: took 20.265µs to acquireMachinesLock for "newest-cni-037368"
	I0404 23:15:41.298989   71049 start.go:96] Skipping create...Using existing machine configuration
	I0404 23:15:41.298996   71049 fix.go:54] fixHost starting: 
	I0404 23:15:41.299246   71049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:15:41.299275   71049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:15:41.313672   71049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I0404 23:15:41.314153   71049 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:15:41.314643   71049 main.go:141] libmachine: Using API Version  1
	I0404 23:15:41.314671   71049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:15:41.315117   71049 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:15:41.315309   71049 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	I0404 23:15:41.315458   71049 main.go:141] libmachine: (newest-cni-037368) Calling .GetState
	I0404 23:15:41.316964   71049 fix.go:112] recreateIfNeeded on newest-cni-037368: state=Stopped err=<nil>
	I0404 23:15:41.316990   71049 main.go:141] libmachine: (newest-cni-037368) Calling .DriverName
	W0404 23:15:41.317134   71049 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 23:15:41.319199   71049 out.go:177] * Restarting existing kvm2 VM for "newest-cni-037368" ...
	I0404 23:15:41.320692   71049 main.go:141] libmachine: (newest-cni-037368) Calling .Start
	I0404 23:15:41.320884   71049 main.go:141] libmachine: (newest-cni-037368) Ensuring networks are active...
	I0404 23:15:41.321616   71049 main.go:141] libmachine: (newest-cni-037368) Ensuring network default is active
	I0404 23:15:41.321928   71049 main.go:141] libmachine: (newest-cni-037368) Ensuring network mk-newest-cni-037368 is active
	I0404 23:15:41.322399   71049 main.go:141] libmachine: (newest-cni-037368) Getting domain xml...
	I0404 23:15:41.323098   71049 main.go:141] libmachine: (newest-cni-037368) Creating domain...
	I0404 23:15:42.528422   71049 main.go:141] libmachine: (newest-cni-037368) Waiting to get IP...
	I0404 23:15:42.529245   71049 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:15:42.529674   71049 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:15:42.529758   71049 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:15:42.529662   71084 retry.go:31] will retry after 239.592987ms: waiting for machine to come up
	I0404 23:15:42.771186   71049 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:15:42.771717   71049 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:15:42.771747   71049 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:15:42.771670   71084 retry.go:31] will retry after 334.656296ms: waiting for machine to come up
	I0404 23:15:43.108341   71049 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:15:43.108821   71049 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:15:43.108867   71049 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:15:43.108764   71084 retry.go:31] will retry after 295.829757ms: waiting for machine to come up
	I0404 23:15:43.406449   71049 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:15:43.406938   71049 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:15:43.406965   71049 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:15:43.406904   71084 retry.go:31] will retry after 564.865587ms: waiting for machine to come up
	I0404 23:15:43.973601   71049 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:15:43.973958   71049 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:15:43.974017   71049 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:15:43.973940   71084 retry.go:31] will retry after 709.272483ms: waiting for machine to come up
	I0404 23:15:44.684790   71049 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:15:44.685230   71049 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:15:44.685274   71049 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:15:44.685177   71084 retry.go:31] will retry after 591.760453ms: waiting for machine to come up
	I0404 23:15:45.278884   71049 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:15:45.279436   71049 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:15:45.279462   71049 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:15:45.279383   71084 retry.go:31] will retry after 1.056490182s: waiting for machine to come up
	I0404 23:15:46.337507   71049 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:15:46.337986   71049 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:15:46.338014   71049 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:15:46.337925   71084 retry.go:31] will retry after 1.355478867s: waiting for machine to come up
	I0404 23:15:47.695515   71049 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:15:47.695949   71049 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:15:47.695974   71049 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:15:47.695904   71084 retry.go:31] will retry after 1.773490857s: waiting for machine to come up
	I0404 23:15:49.471503   71049 main.go:141] libmachine: (newest-cni-037368) DBG | domain newest-cni-037368 has defined MAC address 52:54:00:28:28:2c in network mk-newest-cni-037368
	I0404 23:15:49.472065   71049 main.go:141] libmachine: (newest-cni-037368) DBG | unable to find current IP address of domain newest-cni-037368 in network mk-newest-cni-037368
	I0404 23:15:49.472102   71049 main.go:141] libmachine: (newest-cni-037368) DBG | I0404 23:15:49.472000   71084 retry.go:31] will retry after 1.776327968s: waiting for machine to come up
	
	
	==> CRI-O <==
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.582385802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272552582350751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfd09635-49b7-4e07-b10c-f0100191c582 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.583224377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=574f8d6b-22d3-4605-83d3-ce4f44a20d55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.583307134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=574f8d6b-22d3-4605-83d3-ce4f44a20d55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.583518602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe0f596b810aff117130a571543bc585e1604fcfc7afc61e786d1c037dbeb02b,PodSandboxId:1e606d410069f89d9744d74ebfe69285cdfceeb2a14222c7007e6949542c5cba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654931889621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vnzlh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acab1107-bd9a-4767-bbcd-705faf9e4dea,},Annotations:map[string]string{io.kubernetes.container.hash: 3f538ba1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9948bf2c9f2cb8ecddf4ac62a62b91d29698209bcb7079973977921dd8ddb853,PodSandboxId:defd4ff15641e0443d4b54476a8c54f5600aabd71ecb1f335f08455a8a895fff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271654395577520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lbw9b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a,},Annotations:map[string]string{io.kubernetes.container.hash: f39b82e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7558f6eadded1c96ef90c16f63ed51c5e229bc4eaa7423972fc578c1f292ecf2,PodSandboxId:b74175d4d116a5b45f05955a31f1c0729f29fc8830814d7b2b4562a709d5b7a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271654337286800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0b001dd3-825c-43ed-903d-669afc75f79c,},Annotations:map[string]string{io.kubernetes.container.hash: f29f837b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667d376fb5c7f9a9c062d0ac724ba8abc3a136d98b147e30764e17014a43484e,PodSandboxId:31228057f26cc126eb6d25ed5cd3e6da1cd9adbcd454e6cd4c8948a754c85e02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654246297493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-t2l7m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc43d3e-d639-462b-81f1-d
4abcdcdbe91,},Annotations:map[string]string{io.kubernetes.container.hash: ee9bdf36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c86ec55c075385f1ac8907e649841db16fa9e1f2638b4db7be807ae150805e,PodSandboxId:7120bb2ac9655e8b6115db33ed82891274880ca4f3461ccfb52963261f06bf83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271633312950817
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9822b8713333441e2a7a7ef7e60a1807,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93a3fad2e101dedf64206032499c316421fe3dbb2346f9f0f67a9b16b5ad873,PodSandboxId:475c99bc935a4e8081c8011ca515c5a913621c13312a09cdb1b14267674ffc6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271633343028797,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9190b1dfcc94d08c85e02314ffdfe51,},Annotations:map[string]string{io.kubernetes.container.hash: df443e8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f3e9fe1de462bb096dd87f69962d3bec0e2eaf367bb2d48580049ddc9d4188,PodSandboxId:310f3182fd0f5fc11ebc0570506c935682e9649cc37bda709210bf86cacf3b76,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271633269622479,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afd80574e47ff311ef88779c9104c783,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9291b35e905cd430b588af3320216cda3f60bc245e92ddd9bee68dad11121c4d,PodSandboxId:5a065425e27e00ea7f3b076f2cdc70981843f8bd80597d3910e0d4aa1f8e7d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271633233027342,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b326420aa1703df382ccfa0814ed1c485db912b916d62fedac208e22833db9,PodSandboxId:590a7c82d2f705da124fa2fb6f39452cf6e0e5523b4c90ed43552f0b9c4c2f56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712271344632664693,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=574f8d6b-22d3-4605-83d3-ce4f44a20d55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.627188341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21de1d36-2c80-4c62-9096-29f64c7eaf11 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.627261807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21de1d36-2c80-4c62-9096-29f64c7eaf11 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.628429479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e791d3f4-3d4a-4617-a56f-fba922e2801e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.628950101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272552628925738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e791d3f4-3d4a-4617-a56f-fba922e2801e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.629891807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ddf54bc-3935-46f9-9c1e-b25e7dd8fb00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.629978652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ddf54bc-3935-46f9-9c1e-b25e7dd8fb00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.630259333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe0f596b810aff117130a571543bc585e1604fcfc7afc61e786d1c037dbeb02b,PodSandboxId:1e606d410069f89d9744d74ebfe69285cdfceeb2a14222c7007e6949542c5cba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654931889621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vnzlh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acab1107-bd9a-4767-bbcd-705faf9e4dea,},Annotations:map[string]string{io.kubernetes.container.hash: 3f538ba1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9948bf2c9f2cb8ecddf4ac62a62b91d29698209bcb7079973977921dd8ddb853,PodSandboxId:defd4ff15641e0443d4b54476a8c54f5600aabd71ecb1f335f08455a8a895fff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271654395577520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lbw9b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a,},Annotations:map[string]string{io.kubernetes.container.hash: f39b82e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7558f6eadded1c96ef90c16f63ed51c5e229bc4eaa7423972fc578c1f292ecf2,PodSandboxId:b74175d4d116a5b45f05955a31f1c0729f29fc8830814d7b2b4562a709d5b7a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271654337286800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0b001dd3-825c-43ed-903d-669afc75f79c,},Annotations:map[string]string{io.kubernetes.container.hash: f29f837b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667d376fb5c7f9a9c062d0ac724ba8abc3a136d98b147e30764e17014a43484e,PodSandboxId:31228057f26cc126eb6d25ed5cd3e6da1cd9adbcd454e6cd4c8948a754c85e02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654246297493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-t2l7m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc43d3e-d639-462b-81f1-d
4abcdcdbe91,},Annotations:map[string]string{io.kubernetes.container.hash: ee9bdf36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c86ec55c075385f1ac8907e649841db16fa9e1f2638b4db7be807ae150805e,PodSandboxId:7120bb2ac9655e8b6115db33ed82891274880ca4f3461ccfb52963261f06bf83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271633312950817
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9822b8713333441e2a7a7ef7e60a1807,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93a3fad2e101dedf64206032499c316421fe3dbb2346f9f0f67a9b16b5ad873,PodSandboxId:475c99bc935a4e8081c8011ca515c5a913621c13312a09cdb1b14267674ffc6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271633343028797,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9190b1dfcc94d08c85e02314ffdfe51,},Annotations:map[string]string{io.kubernetes.container.hash: df443e8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f3e9fe1de462bb096dd87f69962d3bec0e2eaf367bb2d48580049ddc9d4188,PodSandboxId:310f3182fd0f5fc11ebc0570506c935682e9649cc37bda709210bf86cacf3b76,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271633269622479,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afd80574e47ff311ef88779c9104c783,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9291b35e905cd430b588af3320216cda3f60bc245e92ddd9bee68dad11121c4d,PodSandboxId:5a065425e27e00ea7f3b076f2cdc70981843f8bd80597d3910e0d4aa1f8e7d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271633233027342,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b326420aa1703df382ccfa0814ed1c485db912b916d62fedac208e22833db9,PodSandboxId:590a7c82d2f705da124fa2fb6f39452cf6e0e5523b4c90ed43552f0b9c4c2f56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712271344632664693,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ddf54bc-3935-46f9-9c1e-b25e7dd8fb00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.678079827Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c38daf4-ff99-491f-8715-db06c8c3c718 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.678199068Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c38daf4-ff99-491f-8715-db06c8c3c718 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.680670245Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7bce9f0-e2ed-4dfc-834d-32311f04194c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.681968101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272552681933984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7bce9f0-e2ed-4dfc-834d-32311f04194c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.682625023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0e7ba1b-7e31-4f82-b3e8-198e16d02b53 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.682700547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0e7ba1b-7e31-4f82-b3e8-198e16d02b53 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.682952222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe0f596b810aff117130a571543bc585e1604fcfc7afc61e786d1c037dbeb02b,PodSandboxId:1e606d410069f89d9744d74ebfe69285cdfceeb2a14222c7007e6949542c5cba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654931889621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vnzlh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acab1107-bd9a-4767-bbcd-705faf9e4dea,},Annotations:map[string]string{io.kubernetes.container.hash: 3f538ba1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9948bf2c9f2cb8ecddf4ac62a62b91d29698209bcb7079973977921dd8ddb853,PodSandboxId:defd4ff15641e0443d4b54476a8c54f5600aabd71ecb1f335f08455a8a895fff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271654395577520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lbw9b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a,},Annotations:map[string]string{io.kubernetes.container.hash: f39b82e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7558f6eadded1c96ef90c16f63ed51c5e229bc4eaa7423972fc578c1f292ecf2,PodSandboxId:b74175d4d116a5b45f05955a31f1c0729f29fc8830814d7b2b4562a709d5b7a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271654337286800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0b001dd3-825c-43ed-903d-669afc75f79c,},Annotations:map[string]string{io.kubernetes.container.hash: f29f837b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667d376fb5c7f9a9c062d0ac724ba8abc3a136d98b147e30764e17014a43484e,PodSandboxId:31228057f26cc126eb6d25ed5cd3e6da1cd9adbcd454e6cd4c8948a754c85e02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654246297493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-t2l7m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc43d3e-d639-462b-81f1-d
4abcdcdbe91,},Annotations:map[string]string{io.kubernetes.container.hash: ee9bdf36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c86ec55c075385f1ac8907e649841db16fa9e1f2638b4db7be807ae150805e,PodSandboxId:7120bb2ac9655e8b6115db33ed82891274880ca4f3461ccfb52963261f06bf83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271633312950817
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9822b8713333441e2a7a7ef7e60a1807,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93a3fad2e101dedf64206032499c316421fe3dbb2346f9f0f67a9b16b5ad873,PodSandboxId:475c99bc935a4e8081c8011ca515c5a913621c13312a09cdb1b14267674ffc6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271633343028797,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9190b1dfcc94d08c85e02314ffdfe51,},Annotations:map[string]string{io.kubernetes.container.hash: df443e8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f3e9fe1de462bb096dd87f69962d3bec0e2eaf367bb2d48580049ddc9d4188,PodSandboxId:310f3182fd0f5fc11ebc0570506c935682e9649cc37bda709210bf86cacf3b76,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271633269622479,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afd80574e47ff311ef88779c9104c783,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9291b35e905cd430b588af3320216cda3f60bc245e92ddd9bee68dad11121c4d,PodSandboxId:5a065425e27e00ea7f3b076f2cdc70981843f8bd80597d3910e0d4aa1f8e7d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271633233027342,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b326420aa1703df382ccfa0814ed1c485db912b916d62fedac208e22833db9,PodSandboxId:590a7c82d2f705da124fa2fb6f39452cf6e0e5523b4c90ed43552f0b9c4c2f56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712271344632664693,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0e7ba1b-7e31-4f82-b3e8-198e16d02b53 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.725213859Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f41b03e-285d-400e-bafd-1950a06f028a name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.725357014Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f41b03e-285d-400e-bafd-1950a06f028a name=/runtime.v1.RuntimeService/Version
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.727167162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1dca5331-eabd-46f5-b4b1-743368ea3ef4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.727874236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272552727839170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1dca5331-eabd-46f5-b4b1-743368ea3ef4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.729615536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=189f4493-1cd0-4266-a6b2-5bb7353df387 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.729690690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=189f4493-1cd0-4266-a6b2-5bb7353df387 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:15:52 default-k8s-diff-port-952083 crio[731]: time="2024-04-04 23:15:52.730086699Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe0f596b810aff117130a571543bc585e1604fcfc7afc61e786d1c037dbeb02b,PodSandboxId:1e606d410069f89d9744d74ebfe69285cdfceeb2a14222c7007e6949542c5cba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654931889621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vnzlh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acab1107-bd9a-4767-bbcd-705faf9e4dea,},Annotations:map[string]string{io.kubernetes.container.hash: 3f538ba1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9948bf2c9f2cb8ecddf4ac62a62b91d29698209bcb7079973977921dd8ddb853,PodSandboxId:defd4ff15641e0443d4b54476a8c54f5600aabd71ecb1f335f08455a8a895fff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712271654395577520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lbw9b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a,},Annotations:map[string]string{io.kubernetes.container.hash: f39b82e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7558f6eadded1c96ef90c16f63ed51c5e229bc4eaa7423972fc578c1f292ecf2,PodSandboxId:b74175d4d116a5b45f05955a31f1c0729f29fc8830814d7b2b4562a709d5b7a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712271654337286800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0b001dd3-825c-43ed-903d-669afc75f79c,},Annotations:map[string]string{io.kubernetes.container.hash: f29f837b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667d376fb5c7f9a9c062d0ac724ba8abc3a136d98b147e30764e17014a43484e,PodSandboxId:31228057f26cc126eb6d25ed5cd3e6da1cd9adbcd454e6cd4c8948a754c85e02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712271654246297493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-t2l7m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc43d3e-d639-462b-81f1-d
4abcdcdbe91,},Annotations:map[string]string{io.kubernetes.container.hash: ee9bdf36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c86ec55c075385f1ac8907e649841db16fa9e1f2638b4db7be807ae150805e,PodSandboxId:7120bb2ac9655e8b6115db33ed82891274880ca4f3461ccfb52963261f06bf83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712271633312950817
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9822b8713333441e2a7a7ef7e60a1807,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93a3fad2e101dedf64206032499c316421fe3dbb2346f9f0f67a9b16b5ad873,PodSandboxId:475c99bc935a4e8081c8011ca515c5a913621c13312a09cdb1b14267674ffc6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712271633343028797,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9190b1dfcc94d08c85e02314ffdfe51,},Annotations:map[string]string{io.kubernetes.container.hash: df443e8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f3e9fe1de462bb096dd87f69962d3bec0e2eaf367bb2d48580049ddc9d4188,PodSandboxId:310f3182fd0f5fc11ebc0570506c935682e9649cc37bda709210bf86cacf3b76,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712271633269622479,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afd80574e47ff311ef88779c9104c783,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9291b35e905cd430b588af3320216cda3f60bc245e92ddd9bee68dad11121c4d,PodSandboxId:5a065425e27e00ea7f3b076f2cdc70981843f8bd80597d3910e0d4aa1f8e7d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712271633233027342,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b326420aa1703df382ccfa0814ed1c485db912b916d62fedac208e22833db9,PodSandboxId:590a7c82d2f705da124fa2fb6f39452cf6e0e5523b4c90ed43552f0b9c4c2f56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712271344632664693,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-952083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c4dd72e0a1404b78b3fc33934e70a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9f1976,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=189f4493-1cd0-4266-a6b2-5bb7353df387 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fe0f596b810af       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   1e606d410069f       coredns-76f75df574-vnzlh
	9948bf2c9f2cb       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   14 minutes ago      Running             kube-proxy                0                   defd4ff15641e       kube-proxy-lbw9b
	7558f6eadded1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   b74175d4d116a       storage-provisioner
	667d376fb5c7f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   31228057f26cc       coredns-76f75df574-t2l7m
	a93a3fad2e101       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   475c99bc935a4       etcd-default-k8s-diff-port-952083
	75c86ec55c075       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   15 minutes ago      Running             kube-scheduler            2                   7120bb2ac9655       kube-scheduler-default-k8s-diff-port-952083
	66f3e9fe1de46       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   15 minutes ago      Running             kube-controller-manager   2                   310f3182fd0f5       kube-controller-manager-default-k8s-diff-port-952083
	9291b35e905cd       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   15 minutes ago      Running             kube-apiserver            2                   5a065425e27e0       kube-apiserver-default-k8s-diff-port-952083
	c1b326420aa17       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   20 minutes ago      Exited              kube-apiserver            1                   590a7c82d2f70       kube-apiserver-default-k8s-diff-port-952083
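	
	A listing like the one above can usually be re-generated on the node itself (a sketch, assuming SSH access to this minikube profile and that crictl is available inside the VM):
	
	    # list all CRI-O containers on the node, including exited ones
	    minikube ssh -p default-k8s-diff-port-952083 -- sudo crictl ps -a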
	
	
	==> coredns [667d376fb5c7f9a9c062d0ac724ba8abc3a136d98b147e30764e17014a43484e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [fe0f596b810aff117130a571543bc585e1604fcfc7afc61e786d1c037dbeb02b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-952083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-952083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a
	                    minikube.k8s.io/name=default-k8s-diff-port-952083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_04T23_00_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Apr 2024 23:00:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-952083
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Apr 2024 23:15:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Apr 2024 23:11:10 +0000   Thu, 04 Apr 2024 23:00:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Apr 2024 23:11:10 +0000   Thu, 04 Apr 2024 23:00:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Apr 2024 23:11:10 +0000   Thu, 04 Apr 2024 23:00:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Apr 2024 23:11:10 +0000   Thu, 04 Apr 2024 23:00:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.148
	  Hostname:    default-k8s-diff-port-952083
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3649f3e14ef44d7e8df583f4502764e9
	  System UUID:                3649f3e1-4ef4-4d7e-8df5-83f4502764e9
	  Boot ID:                    9732efbd-d50a-4d8b-b568-3a2b2b2b3406
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-t2l7m                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-76f75df574-vnzlh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-default-k8s-diff-port-952083                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-952083             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-952083    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-lbw9b                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-default-k8s-diff-port-952083             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-szq42                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node default-k8s-diff-port-952083 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node default-k8s-diff-port-952083 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeReady                15m                kubelet          Node default-k8s-diff-port-952083 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node default-k8s-diff-port-952083 event: Registered Node default-k8s-diff-port-952083 in Controller
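	
	The node description above matches what kubectl reports for this node; assuming the kubeconfig context that minikube creates for the profile, it can be re-queried with:
	
	    # dump node labels, conditions, capacity and recent events
	    kubectl --context default-k8s-diff-port-952083 describe node default-k8s-diff-port-952083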
	
	
	==> dmesg <==
	[  +0.059984] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042620] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.073712] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.129299] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.710553] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.102621] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.061699] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074102] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.191005] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.148878] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.325050] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.713532] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.064544] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.679661] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +5.597929] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.436594] kauditd_printk_skb: 79 callbacks suppressed
	[Apr 4 23:00] kauditd_printk_skb: 7 callbacks suppressed
	[  +2.044940] systemd-fstab-generator[3619]: Ignoring "noauto" option for root device
	[  +4.511666] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.289982] systemd-fstab-generator[3945]: Ignoring "noauto" option for root device
	[ +13.894364] systemd-fstab-generator[4149]: Ignoring "noauto" option for root device
	[  +0.114354] kauditd_printk_skb: 14 callbacks suppressed
	[Apr 4 23:01] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [a93a3fad2e101dedf64206032499c316421fe3dbb2346f9f0f67a9b16b5ad873] <==
	{"level":"info","ts":"2024-04-04T23:00:33.885817Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-04T23:00:33.894011Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8362fc97c8dc7c","local-member-id":"ddd8c93e0466f1bf","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T23:00:33.886053Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-04T23:00:33.889499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.148:2379"}
	{"level":"info","ts":"2024-04-04T23:00:33.896947Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T23:00:33.896995Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-04T23:00:33.898547Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-04T23:00:33.898933Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-04T23:10:34.205971Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":717}
	{"level":"info","ts":"2024-04-04T23:10:34.216249Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":717,"took":"9.717902ms","hash":2747205700,"current-db-size-bytes":2355200,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2355200,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-04T23:10:34.216343Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2747205700,"revision":717,"compact-revision":-1}
	{"level":"warn","ts":"2024-04-04T23:15:03.719475Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"378.733648ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17419798750253858417 > lease_revoke:<id:71bf8eab57acbe25>","response":"size:27"}
	{"level":"info","ts":"2024-04-04T23:15:03.720127Z","caller":"traceutil/trace.go:171","msg":"trace[1137633840] linearizableReadLoop","detail":"{readStateIndex:1363; appliedIndex:1362; }","duration":"333.488352ms","start":"2024-04-04T23:15:03.386589Z","end":"2024-04-04T23:15:03.720078Z","steps":["trace[1137633840] 'read index received'  (duration: 36.436µs)","trace[1137633840] 'applied index is now lower than readState.Index'  (duration: 333.449749ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-04T23:15:03.720319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.685766ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-04T23:15:03.720447Z","caller":"traceutil/trace.go:171","msg":"trace[594623208] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1179; }","duration":"333.881132ms","start":"2024-04-04T23:15:03.386554Z","end":"2024-04-04T23:15:03.720436Z","steps":["trace[594623208] 'agreement among raft nodes before linearized reading'  (duration: 333.6882ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T23:15:03.720502Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T23:15:03.38654Z","time spent":"333.942364ms","remote":"127.0.0.1:41080","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-04T23:15:03.720569Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.080423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-04-04T23:15:03.720706Z","caller":"traceutil/trace.go:171","msg":"trace[1404400611] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1179; }","duration":"159.283505ms","start":"2024-04-04T23:15:03.561411Z","end":"2024-04-04T23:15:03.720694Z","steps":["trace[1404400611] 'agreement among raft nodes before linearized reading'  (duration: 159.090451ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T23:15:03.720365Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.297716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-04T23:15:03.720911Z","caller":"traceutil/trace.go:171","msg":"trace[330345883] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1179; }","duration":"266.871245ms","start":"2024-04-04T23:15:03.45403Z","end":"2024-04-04T23:15:03.720901Z","steps":["trace[330345883] 'agreement among raft nodes before linearized reading'  (duration: 266.289127ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-04T23:15:04.101877Z","caller":"traceutil/trace.go:171","msg":"trace[1056102682] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"374.819303ms","start":"2024-04-04T23:15:03.727035Z","end":"2024-04-04T23:15:04.101854Z","steps":["trace[1056102682] 'process raft request'  (duration: 374.480514ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-04T23:15:04.102841Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-04T23:15:03.727017Z","time spent":"375.017055ms","remote":"127.0.0.1:41232","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1178 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-04-04T23:15:34.213822Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":961}
	{"level":"info","ts":"2024-04-04T23:15:34.217953Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":961,"took":"3.621947ms","hash":3749548724,"current-db-size-bytes":2355200,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-04T23:15:34.218049Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3749548724,"revision":961,"compact-revision":717}
	
	
	==> kernel <==
	 23:15:53 up 20 min,  0 users,  load average: 0.23, 0.23, 0.22
	Linux default-k8s-diff-port-952083 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9291b35e905cd430b588af3320216cda3f60bc245e92ddd9bee68dad11121c4d] <==
	I0404 23:10:36.914766       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:11:36.914420       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:11:36.914667       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:11:36.914701       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:11:36.915692       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:11:36.915841       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:11:36.915881       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:13:36.915638       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:13:36.915784       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:13:36.915796       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:13:36.916132       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:13:36.916285       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:13:36.917835       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0404 23:15:35.919960       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:15:35.920094       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0404 23:15:36.920528       1 handler_proxy.go:93] no RequestInfo found in the context
	W0404 23:15:36.920544       1 handler_proxy.go:93] no RequestInfo found in the context
	E0404 23:15:36.920889       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0404 23:15:36.920927       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0404 23:15:36.921004       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0404 23:15:36.922272       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [c1b326420aa1703df382ccfa0814ed1c485db912b916d62fedac208e22833db9] <==
	W0404 23:00:24.995866       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.001605       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.020658       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.025492       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.027021       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.067356       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.148712       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.164299       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.197897       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.206022       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.234151       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.235546       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.251529       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.299194       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.313983       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.528111       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.566009       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:25.587433       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:26.206546       1 logging.go:59] [core] [Channel #196 SubChannel #197] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:28.656824       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:29.334604       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:29.369238       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:29.585116       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:29.613043       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0404 23:00:29.646115       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [66f3e9fe1de462bb096dd87f69962d3bec0e2eaf367bb2d48580049ddc9d4188] <==
	I0404 23:10:22.233506       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:10:51.728988       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:10:52.246538       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:11:21.734556       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:11:22.254647       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:11:51.740665       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:11:52.262994       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0404 23:11:56.316542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="174.724µs"
	I0404 23:12:09.318616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="155.423µs"
	E0404 23:12:21.746094       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:12:22.271712       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:12:51.752878       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:12:52.282000       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:13:21.758177       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:13:22.291131       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:13:51.764376       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:13:52.300487       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:14:21.770558       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:14:22.309041       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:14:51.777097       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:14:52.318969       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:15:21.783073       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:15:22.327568       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0404 23:15:51.789562       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0404 23:15:52.345624       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9948bf2c9f2cb8ecddf4ac62a62b91d29698209bcb7079973977921dd8ddb853] <==
	I0404 23:00:54.921444       1 server_others.go:72] "Using iptables proxy"
	I0404 23:00:54.964361       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.148"]
	I0404 23:00:55.080461       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0404 23:00:55.080610       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0404 23:00:55.080702       1 server_others.go:168] "Using iptables Proxier"
	I0404 23:00:55.084135       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0404 23:00:55.085113       1 server.go:865] "Version info" version="v1.29.3"
	I0404 23:00:55.085236       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0404 23:00:55.088468       1 config.go:188] "Starting service config controller"
	I0404 23:00:55.088572       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0404 23:00:55.088895       1 config.go:97] "Starting endpoint slice config controller"
	I0404 23:00:55.088967       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0404 23:00:55.089962       1 config.go:315] "Starting node config controller"
	I0404 23:00:55.091214       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0404 23:00:55.092161       1 shared_informer.go:318] Caches are synced for node config
	I0404 23:00:55.189221       1 shared_informer.go:318] Caches are synced for service config
	I0404 23:00:55.190475       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [75c86ec55c075385f1ac8907e649841db16fa9e1f2638b4db7be807ae150805e] <==
	W0404 23:00:35.958619       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0404 23:00:35.958648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0404 23:00:36.762958       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0404 23:00:36.763027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0404 23:00:36.817605       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0404 23:00:36.817663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0404 23:00:36.973441       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0404 23:00:36.973488       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0404 23:00:36.991267       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0404 23:00:36.991321       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0404 23:00:36.992491       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0404 23:00:36.992538       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0404 23:00:37.017206       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0404 23:00:37.017295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0404 23:00:37.036058       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0404 23:00:37.036149       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0404 23:00:37.065471       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0404 23:00:37.065523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0404 23:00:37.112123       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0404 23:00:37.112282       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0404 23:00:37.196659       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0404 23:00:37.196706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0404 23:00:37.432079       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0404 23:00:37.432133       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0404 23:00:40.241454       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 04 23:13:39 default-k8s-diff-port-952083 kubelet[3952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:13:39 default-k8s-diff-port-952083 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:13:39 default-k8s-diff-port-952083 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:13:39 default-k8s-diff-port-952083 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:13:43 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:13:43.301879    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:13:55 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:13:55.298593    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:14:06 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:14:06.298559    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:14:17 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:14:17.299314    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:14:28 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:14:28.299140    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:14:39 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:14:39.366601    3952 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 23:14:39 default-k8s-diff-port-952083 kubelet[3952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:14:39 default-k8s-diff-port-952083 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:14:39 default-k8s-diff-port-952083 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:14:39 default-k8s-diff-port-952083 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:14:42 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:14:42.298450    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:14:55 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:14:55.298156    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:15:09 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:15:09.299617    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:15:21 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:15:21.300341    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:15:33 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:15:33.300717    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	Apr 04 23:15:39 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:15:39.365641    3952 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 04 23:15:39 default-k8s-diff-port-952083 kubelet[3952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 04 23:15:39 default-k8s-diff-port-952083 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 04 23:15:39 default-k8s-diff-port-952083 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 04 23:15:39 default-k8s-diff-port-952083 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 04 23:15:44 default-k8s-diff-port-952083 kubelet[3952]: E0404 23:15:44.299122    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-szq42" podUID="23572301-f885-4efd-bbd9-0931b448184f"
	
	
	==> storage-provisioner [7558f6eadded1c96ef90c16f63ed51c5e229bc4eaa7423972fc578c1f292ecf2] <==
	I0404 23:00:54.630574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0404 23:00:54.652942       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0404 23:00:54.653256       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0404 23:00:54.683495       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0404 23:00:54.685178       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-952083_0ac4accc-ee3e-4fa4-aa9e-841c3ce0eb6f!
	I0404 23:00:54.688291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfa93024-1c7d-427e-8f35-daa7a4fc8fec", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-952083_0ac4accc-ee3e-4fa4-aa9e-841c3ce0eb6f became leader
	I0404 23:00:54.785662       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-952083_0ac4accc-ee3e-4fa4-aa9e-841c3ce0eb6f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-952083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-szq42
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-952083 describe pod metrics-server-57f55c9bc5-szq42
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-952083 describe pod metrics-server-57f55c9bc5-szq42: exit status 1 (77.492777ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-szq42" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-952083 describe pod metrics-server-57f55c9bc5-szq42: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (351.50s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (118.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
E0404 23:13:09.142712   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
E0404 23:13:48.669973   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
E0404 23:13:50.479819   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
E0404 23:14:13.464523   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-343162 -n old-k8s-version-343162
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 2 (250.92216ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-343162" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-343162 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-343162 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.164µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-343162 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 2 (249.246972ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-343162 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-343162 logs -n 25: (1.646731382s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-063570 sudo cat                              | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo                                  | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo find                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-063570 sudo crio                             | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-063570                                       | bridge-063570                | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-443615 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:45 UTC |
	|         | disable-driver-mounts-443615                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:45 UTC | 04 Apr 24 22:46 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-952083  | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-143118            | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-024416             | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC | 04 Apr 24 22:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-343162        | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-952083       | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-952083 | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 23:00 UTC |
	|         | default-k8s-diff-port-952083                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-143118                 | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-143118                                  | embed-certs-143118           | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-024416                  | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-024416                                   | no-preload-024416            | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-343162             | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC | 04 Apr 24 22:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-343162                              | old-k8s-version-343162       | jenkins | v1.33.0-beta.0 | 04 Apr 24 22:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 22:50:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 22:50:56.470398   65393 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:50:56.470518   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470527   65393 out.go:304] Setting ErrFile to fd 2...
	I0404 22:50:56.470531   65393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:50:56.470702   65393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:50:56.471207   65393 out.go:298] Setting JSON to false
	I0404 22:50:56.472107   65393 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5602,"bootTime":1712265455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:50:56.472206   65393 start.go:139] virtualization: kvm guest
	I0404 22:50:56.474636   65393 out.go:177] * [old-k8s-version-343162] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:50:56.477086   65393 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:50:56.477116   65393 notify.go:220] Checking for updates...
	I0404 22:50:56.479875   65393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:50:56.481381   65393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:50:56.482598   65393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:50:56.483896   65393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:50:56.485276   65393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:50:56.487030   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:50:56.487414   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.487471   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.502537   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44449
	I0404 22:50:56.502977   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.503568   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.503592   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.503915   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.504146   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.506219   65393 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0404 22:50:56.507513   65393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:50:56.507838   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:50:56.507872   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:50:56.522700   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0404 22:50:56.523202   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:50:56.523675   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:50:56.523702   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:50:56.524012   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:50:56.524213   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:50:56.560806   65393 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 22:50:56.562468   65393 start.go:297] selected driver: kvm2
	I0404 22:50:56.562485   65393 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.562593   65393 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:50:56.563382   65393 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.563463   65393 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 22:50:56.579804   65393 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 22:50:56.580216   65393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:50:56.580311   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:50:56.580325   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:50:56.580361   65393 start.go:340] cluster config:
	{Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:50:56.580508   65393 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 22:50:56.582753   65393 out.go:177] * Starting "old-k8s-version-343162" primary control-plane node in "old-k8s-version-343162" cluster
	I0404 22:50:55.940353   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:50:56.584381   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:50:56.584440   65393 preload.go:147] Found local preload: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0404 22:50:56.584451   65393 cache.go:56] Caching tarball of preloaded images
	I0404 22:50:56.584585   65393 preload.go:173] Found /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0404 22:50:56.584616   65393 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0404 22:50:56.584754   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:50:56.585041   65393 start.go:360] acquireMachinesLock for old-k8s-version-343162: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:50:59.012433   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:05.092380   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:08.164456   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:14.244461   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:17.316477   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:23.396389   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:26.468349   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:32.548445   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:35.620422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:41.700386   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:44.772391   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:50.852426   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:51:53.924460   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:00.004442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:03.076397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:09.156418   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:12.228432   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:18.308468   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:21.380441   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:27.460405   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:30.532470   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:36.612450   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:39.684473   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:45.764397   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:48.836435   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:54.916369   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:52:57.992400   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:04.068476   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:07.140457   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:13.220492   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:16.292379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:22.372422   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:25.444444   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:31.524421   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:34.596378   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:40.676452   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:43.748475   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:49.828481   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:52.900427   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:53:58.980363   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:02.052442   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:08.132379   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:11.204438   64791 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.148:22: connect: no route to host
	I0404 22:54:14.209023   64902 start.go:364] duration metric: took 4m27.708792236s to acquireMachinesLock for "embed-certs-143118"
	I0404 22:54:14.209073   64902 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:14.209081   64902 fix.go:54] fixHost starting: 
	I0404 22:54:14.209454   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:14.209493   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:14.224780   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0404 22:54:14.225202   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:14.225774   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:14.225796   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:14.226086   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:14.226268   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:14.226381   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:14.227982   64902 fix.go:112] recreateIfNeeded on embed-certs-143118: state=Stopped err=<nil>
	I0404 22:54:14.228030   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	W0404 22:54:14.228195   64902 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:14.229974   64902 out.go:177] * Restarting existing kvm2 VM for "embed-certs-143118" ...
	I0404 22:54:14.231465   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Start
	I0404 22:54:14.231647   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring networks are active...
	I0404 22:54:14.232454   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network default is active
	I0404 22:54:14.232844   64902 main.go:141] libmachine: (embed-certs-143118) Ensuring network mk-embed-certs-143118 is active
	I0404 22:54:14.233268   64902 main.go:141] libmachine: (embed-certs-143118) Getting domain xml...
	I0404 22:54:14.234059   64902 main.go:141] libmachine: (embed-certs-143118) Creating domain...
	I0404 22:54:15.443570   64902 main.go:141] libmachine: (embed-certs-143118) Waiting to get IP...
	I0404 22:54:15.444501   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.444881   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.444974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.444857   65866 retry.go:31] will retry after 235.384261ms: waiting for machine to come up
	I0404 22:54:15.682336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.682843   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.682889   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.682802   65866 retry.go:31] will retry after 298.217645ms: waiting for machine to come up
	I0404 22:54:15.982346   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:15.982793   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:15.982822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:15.982737   65866 retry.go:31] will retry after 388.227781ms: waiting for machine to come up
	I0404 22:54:16.372259   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.372758   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.372783   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.372709   65866 retry.go:31] will retry after 455.494549ms: waiting for machine to come up
	I0404 22:54:14.206346   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:14.206380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206696   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:54:14.206723   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:54:14.206981   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:54:14.208882   64791 machine.go:97] duration metric: took 4m37.40831033s to provisionDockerMachine
	I0404 22:54:14.208934   64791 fix.go:56] duration metric: took 4m37.430672573s for fixHost
	I0404 22:54:14.208943   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 4m37.430710876s
	W0404 22:54:14.208977   64791 start.go:713] error starting host: provision: host is not running
	W0404 22:54:14.209074   64791 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0404 22:54:14.209085   64791 start.go:728] Will try again in 5 seconds ...
	I0404 22:54:16.829381   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:16.829863   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:16.829895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:16.829758   65866 retry.go:31] will retry after 476.920945ms: waiting for machine to come up
	I0404 22:54:17.308558   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:17.309233   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:17.309260   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:17.309155   65866 retry.go:31] will retry after 723.322819ms: waiting for machine to come up
	I0404 22:54:18.034138   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.034605   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.034633   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.034559   65866 retry.go:31] will retry after 858.492179ms: waiting for machine to come up
	I0404 22:54:18.894172   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:18.894590   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:18.894621   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:18.894541   65866 retry.go:31] will retry after 1.243998506s: waiting for machine to come up
	I0404 22:54:20.140387   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:20.140872   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:20.140906   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:20.140777   65866 retry.go:31] will retry after 1.245446322s: waiting for machine to come up
	I0404 22:54:21.388210   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:21.388626   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:21.388653   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:21.388588   65866 retry.go:31] will retry after 2.315520772s: waiting for machine to come up
	I0404 22:54:19.210605   64791 start.go:360] acquireMachinesLock for default-k8s-diff-port-952083: {Name:mk040ceb559ac497d91e9eaa910f4092c32a416a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0404 22:54:23.707374   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:23.707938   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:23.707971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:23.707895   65866 retry.go:31] will retry after 1.925131112s: waiting for machine to come up
	I0404 22:54:25.635778   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:25.636251   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:25.636281   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:25.636192   65866 retry.go:31] will retry after 3.393560306s: waiting for machine to come up
	I0404 22:54:29.033804   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:29.034284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | unable to find current IP address of domain embed-certs-143118 in network mk-embed-certs-143118
	I0404 22:54:29.034311   64902 main.go:141] libmachine: (embed-certs-143118) DBG | I0404 22:54:29.034223   65866 retry.go:31] will retry after 4.387913748s: waiting for machine to come up
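The retry lines above follow a simple wait-with-backoff pattern: poll libvirt for the domain's DHCP lease and sleep a growing, jittered interval between attempts until an IP appears or a deadline passes. The Go sketch below is only an illustration of that pattern, assuming invented names, durations, and a fake lookup function; it is not minikube's actual retry.go API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it yields an address, sleeping a growing,
// jittered interval between attempts, in the spirit of the retry lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Jitter the delay a little so parallel machines do not retry in lockstep.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 3 { // pretend the DHCP lease shows up on the third poll
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.61.137", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}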
	I0404 22:54:34.941563   65047 start.go:364] duration metric: took 4m31.283417073s to acquireMachinesLock for "no-preload-024416"
	I0404 22:54:34.941648   65047 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:34.941658   65047 fix.go:54] fixHost starting: 
	I0404 22:54:34.942144   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:34.942187   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:34.959447   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0404 22:54:34.959938   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:34.960536   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:54:34.960563   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:34.960909   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:34.961137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:34.961305   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:54:34.963158   65047 fix.go:112] recreateIfNeeded on no-preload-024416: state=Stopped err=<nil>
	I0404 22:54:34.963183   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	W0404 22:54:34.963366   65047 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:34.965339   65047 out.go:177] * Restarting existing kvm2 VM for "no-preload-024416" ...
	I0404 22:54:33.427303   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427873   64902 main.go:141] libmachine: (embed-certs-143118) Found IP for machine: 192.168.61.137
	I0404 22:54:33.427903   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has current primary IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.427916   64902 main.go:141] libmachine: (embed-certs-143118) Reserving static IP address...
	I0404 22:54:33.428384   64902 main.go:141] libmachine: (embed-certs-143118) Reserved static IP address: 192.168.61.137
	I0404 22:54:33.428436   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.428454   64902 main.go:141] libmachine: (embed-certs-143118) Waiting for SSH to be available...
	I0404 22:54:33.428483   64902 main.go:141] libmachine: (embed-certs-143118) DBG | skip adding static IP to network mk-embed-certs-143118 - found existing host DHCP lease matching {name: "embed-certs-143118", mac: "52:54:00:c1:29:65", ip: "192.168.61.137"}
	I0404 22:54:33.428496   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Getting to WaitForSSH function...
	I0404 22:54:33.430650   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.430971   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.430999   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.431167   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH client type: external
	I0404 22:54:33.431187   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa (-rw-------)
	I0404 22:54:33.431213   64902 main.go:141] libmachine: (embed-certs-143118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:33.431231   64902 main.go:141] libmachine: (embed-certs-143118) DBG | About to run SSH command:
	I0404 22:54:33.431247   64902 main.go:141] libmachine: (embed-certs-143118) DBG | exit 0
	I0404 22:54:33.556780   64902 main.go:141] libmachine: (embed-certs-143118) DBG | SSH cmd err, output: <nil>: 
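The WaitForSSH step above shells out to the system ssh binary with a throwaway known-hosts file and keeps running "exit 0" until the command succeeds. A rough, self-contained Go equivalent using os/exec is sketched below; the key path and address are placeholders copied from the log, and this is a sketch of the idea, not minikube's actual sshutil code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder path and address, copied from the log lines above.
	key := "/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa"
	host := "docker@192.168.61.137"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		host,
		"exit 0", // the same liveness probe the log runs
	}
	if err := exec.Command("ssh", args...).Run(); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}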
	I0404 22:54:33.557184   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetConfigRaw
	I0404 22:54:33.557821   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.560786   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561122   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.561156   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.561482   64902 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/config.json ...
	I0404 22:54:33.561714   64902 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:33.561738   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:33.562027   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.564494   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564777   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.564800   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.564996   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.565203   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565364   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.565525   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.565660   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.565840   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.565851   64902 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:33.672777   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:33.672807   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673051   64902 buildroot.go:166] provisioning hostname "embed-certs-143118"
	I0404 22:54:33.673072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.673221   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.675895   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676276   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.676302   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.676441   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.676631   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676783   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.676920   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.677099   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.677291   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.677306   64902 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-143118 && echo "embed-certs-143118" | sudo tee /etc/hostname
	I0404 22:54:33.800210   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-143118
	
	I0404 22:54:33.800234   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.802902   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803284   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.803313   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.803464   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:33.803699   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.803917   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:33.804130   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:33.804307   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:33.804477   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:33.804493   64902 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-143118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-143118/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-143118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:33.922754   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:33.922787   64902 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:33.922810   64902 buildroot.go:174] setting up certificates
	I0404 22:54:33.922821   64902 provision.go:84] configureAuth start
	I0404 22:54:33.922829   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetMachineName
	I0404 22:54:33.923168   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:33.926018   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926349   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.926376   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.926536   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:33.928860   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929202   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:33.929225   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:33.929405   64902 provision.go:143] copyHostCerts
	I0404 22:54:33.929464   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:33.929474   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:33.929538   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:33.929623   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:33.929631   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:33.929654   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:33.929705   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:33.929712   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:33.929733   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:33.929781   64902 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.embed-certs-143118 san=[127.0.0.1 192.168.61.137 embed-certs-143118 localhost minikube]
	I0404 22:54:34.248318   64902 provision.go:177] copyRemoteCerts
	I0404 22:54:34.248368   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:34.248392   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.251549   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.251969   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.252005   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.252162   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.252415   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.252592   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.252806   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.340287   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:54:34.367083   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:34.392473   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:34.418850   64902 provision.go:87] duration metric: took 496.019024ms to configureAuth
	I0404 22:54:34.418876   64902 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:34.419050   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:34.419118   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.422414   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422794   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.422822   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.422989   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.423196   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423395   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.423548   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.423706   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.423903   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.423926   64902 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:34.698283   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:34.698315   64902 machine.go:97] duration metric: took 1.136579802s to provisionDockerMachine
	I0404 22:54:34.698330   64902 start.go:293] postStartSetup for "embed-certs-143118" (driver="kvm2")
	I0404 22:54:34.698343   64902 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:34.698362   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.698738   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:34.698767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.701491   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.701869   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.701899   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.702062   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.702269   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.702410   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.702580   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.787376   64902 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:34.791940   64902 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:34.791969   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:34.792032   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:34.792113   64902 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:34.792228   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:34.801800   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:34.828107   64902 start.go:296] duration metric: took 129.762377ms for postStartSetup
	I0404 22:54:34.828175   64902 fix.go:56] duration metric: took 20.61909326s for fixHost
	I0404 22:54:34.828200   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.831336   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.831914   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.831939   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.832211   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.832443   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832725   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.832911   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.833074   64902 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:34.833242   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0404 22:54:34.833253   64902 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:54:34.941376   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271274.913625729
	
	I0404 22:54:34.941401   64902 fix.go:216] guest clock: 1712271274.913625729
	I0404 22:54:34.941409   64902 fix.go:229] Guest: 2024-04-04 22:54:34.913625729 +0000 UTC Remote: 2024-04-04 22:54:34.828180786 +0000 UTC m=+288.480480037 (delta=85.444943ms)
	I0404 22:54:34.941435   64902 fix.go:200] guest clock delta is within tolerance: 85.444943ms
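The guest-clock check above compares the timestamp reported by the VM over SSH against the host-side timestamp and verifies the difference stays within a tolerance before continuing. The small Go sketch below reproduces that comparison using the two timestamps from the log; the one-second tolerance is an assumption made for the illustration, not necessarily the value fix.go uses.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest and host timestamps taken from the fix.go lines above.
	guest := time.Date(2024, 4, 4, 22, 54, 34, 913625729, time.UTC)
	remote := time.Date(2024, 4, 4, 22, 54, 34, 828180786, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}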
	I0404 22:54:34.941442   64902 start.go:83] releasing machines lock for "embed-certs-143118", held for 20.732383788s
	I0404 22:54:34.941472   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.941770   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:34.944521   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.944943   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.944973   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.945137   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945761   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.945994   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:34.946079   64902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:34.946123   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.946244   64902 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:34.946271   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:34.948974   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949059   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949433   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949468   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:34.949503   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:34.949597   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949832   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:34.949893   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950007   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:34.950094   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950167   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:34.950239   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:34.950311   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:35.061691   64902 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:35.068941   64902 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:35.222979   64902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:35.230776   64902 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:35.230861   64902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:35.250962   64902 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:35.250993   64902 start.go:494] detecting cgroup driver to use...
	I0404 22:54:35.251078   64902 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:35.270027   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:35.286582   64902 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:35.286642   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:35.305465   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:35.323473   64902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:35.448815   64902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:35.601864   64902 docker.go:233] disabling docker service ...
	I0404 22:54:35.602013   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:35.621617   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:35.638755   64902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:35.784051   64902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:35.909763   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:35.925820   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:35.946492   64902 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:35.946555   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.958517   64902 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:35.958584   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.971470   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.983820   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:35.996000   64902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:36.009730   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.022318   64902 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.042685   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:36.054710   64902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:36.066952   64902 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:36.067023   64902 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:36.083843   64902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:54:36.096564   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:36.228943   64902 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:54:36.376198   64902 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:36.376276   64902 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:36.382097   64902 start.go:562] Will wait 60s for crictl version
	I0404 22:54:36.382176   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:54:36.386651   64902 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:36.425845   64902 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:36.425931   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.456397   64902 ssh_runner.go:195] Run: crio --version
	I0404 22:54:36.491436   64902 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
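After rewriting /etc/crio/crio.conf.d/02-crio.conf and restarting crio, the log above waits up to 60s for /var/run/crio/crio.sock to appear before querying crictl. The Go sketch below shows that kind of socket wait as a local stat-polling loop; it is an illustration of the pattern under those assumptions, not the ssh_runner/start.go implementation, which runs the check on the remote machine.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}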
	I0404 22:54:34.967133   65047 main.go:141] libmachine: (no-preload-024416) Calling .Start
	I0404 22:54:34.967354   65047 main.go:141] libmachine: (no-preload-024416) Ensuring networks are active...
	I0404 22:54:34.968261   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network default is active
	I0404 22:54:34.968700   65047 main.go:141] libmachine: (no-preload-024416) Ensuring network mk-no-preload-024416 is active
	I0404 22:54:34.969252   65047 main.go:141] libmachine: (no-preload-024416) Getting domain xml...
	I0404 22:54:34.969956   65047 main.go:141] libmachine: (no-preload-024416) Creating domain...
	I0404 22:54:36.226773   65047 main.go:141] libmachine: (no-preload-024416) Waiting to get IP...
	I0404 22:54:36.227926   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.228451   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.228535   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.228427   65995 retry.go:31] will retry after 247.657069ms: waiting for machine to come up
	I0404 22:54:36.477997   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.478543   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.478576   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.478489   65995 retry.go:31] will retry after 249.517341ms: waiting for machine to come up
	I0404 22:54:36.730176   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:36.730636   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:36.730665   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:36.730591   65995 retry.go:31] will retry after 476.552832ms: waiting for machine to come up
	I0404 22:54:37.209208   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.209677   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.209709   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.209661   65995 retry.go:31] will retry after 435.547363ms: waiting for machine to come up
	I0404 22:54:37.647412   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:37.647985   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:37.648014   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:37.647932   65995 retry.go:31] will retry after 506.589084ms: waiting for machine to come up
	I0404 22:54:38.155673   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:38.156207   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:38.156255   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:38.156171   65995 retry.go:31] will retry after 890.290504ms: waiting for machine to come up
	I0404 22:54:36.493249   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetIP
	I0404 22:54:36.496475   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.496905   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:36.496937   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:36.497232   64902 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:36.502139   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:36.517272   64902 kubeadm.go:877] updating cluster {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:36.517432   64902 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:54:36.517497   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:36.564031   64902 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:54:36.564108   64902 ssh_runner.go:195] Run: which lz4
	I0404 22:54:36.568713   64902 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0404 22:54:36.573960   64902 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:54:36.574006   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:54:38.239579   64902 crio.go:462] duration metric: took 1.670908361s to copy over tarball
	I0404 22:54:38.239657   64902 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:54:40.665443   64902 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.425757745s)
	I0404 22:54:40.665487   64902 crio.go:469] duration metric: took 2.425877834s to extract the tarball
	I0404 22:54:40.665496   64902 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:54:40.703772   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:40.754221   64902 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:54:40.754248   64902 cache_images.go:84] Images are preloaded, skipping loading
	I0404 22:54:40.754256   64902 kubeadm.go:928] updating node { 192.168.61.137 8443 v1.29.3 crio true true} ...
	I0404 22:54:40.754363   64902 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-143118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:54:40.754464   64902 ssh_runner.go:195] Run: crio config
	I0404 22:54:40.811854   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:40.811888   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:40.811906   64902 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:54:40.811936   64902 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-143118 NodeName:embed-certs-143118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:54:40.812177   64902 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-143118"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:54:40.812313   64902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:54:40.824209   64902 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:54:40.824295   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:54:40.835585   64902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0404 22:54:40.854777   64902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:54:40.875571   64902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0404 22:54:40.896639   64902 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0404 22:54:40.901267   64902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:40.916843   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:41.050188   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:41.070935   64902 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118 for IP: 192.168.61.137
	I0404 22:54:41.070957   64902 certs.go:194] generating shared ca certs ...
	I0404 22:54:41.070972   64902 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:41.071132   64902 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:54:41.071191   64902 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:54:41.071205   64902 certs.go:256] generating profile certs ...
	I0404 22:54:41.071322   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/client.key
	I0404 22:54:41.071399   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key.e7f8ac5b
	I0404 22:54:41.071435   64902 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key
	I0404 22:54:41.071553   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:54:41.071585   64902 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:54:41.071596   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:54:41.071624   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:54:41.071649   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:54:41.071670   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:54:41.071725   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:41.072445   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:54:41.108370   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:54:41.142072   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:54:41.179263   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:54:41.226769   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0404 22:54:41.273570   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:54:41.306526   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:54:41.336764   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/embed-certs-143118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:54:41.367053   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:54:39.048106   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.048587   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.048616   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.048540   65995 retry.go:31] will retry after 946.742057ms: waiting for machine to come up
	I0404 22:54:39.997241   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:39.997737   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:39.997774   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:39.997679   65995 retry.go:31] will retry after 1.053079472s: waiting for machine to come up
	I0404 22:54:41.052284   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:41.052796   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:41.052835   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:41.052762   65995 retry.go:31] will retry after 1.551456209s: waiting for machine to come up
	I0404 22:54:42.606789   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:42.607297   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:42.607335   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:42.607228   65995 retry.go:31] will retry after 2.022953695s: waiting for machine to come up
	I0404 22:54:41.395449   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:54:41.549507   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:54:41.578394   64902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:54:41.597960   64902 ssh_runner.go:195] Run: openssl version
	I0404 22:54:41.604217   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:54:41.618340   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623568   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.623626   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:54:41.630781   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:54:41.644981   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:54:41.657992   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663188   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.663242   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:54:41.670992   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:54:41.683812   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:54:41.696848   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702441   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.702499   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:54:41.709270   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:54:41.722509   64902 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:54:41.727688   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:54:41.734456   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:54:41.741006   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:54:41.748106   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:54:41.754559   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:54:41.761107   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:54:41.768416   64902 kubeadm.go:391] StartCluster: {Name:embed-certs-143118 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-143118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:54:41.768497   64902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:54:41.768557   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.815664   64902 cri.go:89] found id: ""
	I0404 22:54:41.815737   64902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:54:41.829255   64902 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:54:41.829283   64902 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:54:41.829290   64902 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:54:41.829333   64902 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:54:41.842482   64902 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:54:41.843868   64902 kubeconfig.go:125] found "embed-certs-143118" server: "https://192.168.61.137:8443"
	I0404 22:54:41.846707   64902 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:54:41.858505   64902 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.137
	I0404 22:54:41.858544   64902 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:54:41.858558   64902 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:54:41.858616   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:54:41.905608   64902 cri.go:89] found id: ""
	I0404 22:54:41.905686   64902 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:54:41.928336   64902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:54:41.939893   64902 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:54:41.939913   64902 kubeadm.go:156] found existing configuration files:
	
	I0404 22:54:41.939967   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:54:41.950100   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:54:41.950159   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:54:41.961297   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:54:41.972267   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:54:41.972350   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:54:41.983395   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:54:41.994162   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:54:41.994256   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:54:42.005118   64902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:54:42.015514   64902 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:54:42.015583   64902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:54:42.027013   64902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:54:42.037495   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:42.144612   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.301722   64902 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.157071112s)
	I0404 22:54:43.301751   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.527881   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.621621   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:43.708816   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:54:43.708949   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.209626   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.709138   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:54:44.734681   64902 api_server.go:72] duration metric: took 1.025863443s to wait for apiserver process to appear ...
	I0404 22:54:44.734715   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:54:44.734742   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.735296   64902 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
	I0404 22:54:45.235123   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:44.632681   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:44.633163   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:44.633192   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:44.633124   65995 retry.go:31] will retry after 2.627056472s: waiting for machine to come up
	I0404 22:54:47.262212   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:47.262588   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:47.262618   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:47.262539   65995 retry.go:31] will retry after 3.141452547s: waiting for machine to come up
	I0404 22:54:47.737311   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.737339   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:47.737356   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:47.782485   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:54:47.782519   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:54:48.235046   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.240034   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.240068   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:48.735331   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:48.745583   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:54:48.745624   64902 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:54:49.235513   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:54:49.239826   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:54:49.248617   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:54:49.248647   64902 api_server.go:131] duration metric: took 4.51392228s to wait for apiserver health ...
	I0404 22:54:49.248655   64902 cni.go:84] Creating CNI manager for ""
	I0404 22:54:49.248662   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:54:49.250501   64902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 22:54:49.252206   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:54:49.271104   64902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:54:49.297257   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:54:49.309692   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:54:49.309726   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:54:49.309733   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:54:49.309741   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:54:49.309746   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:54:49.309751   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:54:49.309756   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:54:49.309764   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:54:49.309768   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:54:49.309775   64902 system_pods.go:74] duration metric: took 12.491664ms to wait for pod list to return data ...
	I0404 22:54:49.309782   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:54:49.315864   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:54:49.315890   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:54:49.315901   64902 node_conditions.go:105] duration metric: took 6.114458ms to run NodePressure ...
	I0404 22:54:49.315926   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:54:49.610849   64902 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615336   64902 kubeadm.go:733] kubelet initialised
	I0404 22:54:49.615356   64902 kubeadm.go:734] duration metric: took 4.484086ms waiting for restarted kubelet to initialise ...
	I0404 22:54:49.615364   64902 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:49.621224   64902 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.626963   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.626988   64902 pod_ready.go:81] duration metric: took 5.735991ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.626996   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "coredns-76f75df574-9qh9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.627002   64902 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.633596   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633620   64902 pod_ready.go:81] duration metric: took 6.610566ms for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.633628   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "etcd-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.633634   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.639137   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639168   64902 pod_ready.go:81] duration metric: took 5.51865ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.639177   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.639183   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:49.701443   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701475   64902 pod_ready.go:81] duration metric: took 62.283656ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:49.701488   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:49.701497   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.100946   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100974   64902 pod_ready.go:81] duration metric: took 399.467513ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.100984   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-proxy-psst7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.100990   64902 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.501078   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501115   64902 pod_ready.go:81] duration metric: took 400.110991ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.501126   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.501135   64902 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:50.902773   64902 pod_ready.go:97] node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902804   64902 pod_ready.go:81] duration metric: took 401.658155ms for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:54:50.902817   64902 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-143118" hosting pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:50.902826   64902 pod_ready.go:38] duration metric: took 1.287454227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:50.902842   64902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:54:50.915531   64902 ops.go:34] apiserver oom_adj: -16
	I0404 22:54:50.915553   64902 kubeadm.go:591] duration metric: took 9.086257382s to restartPrimaryControlPlane
	I0404 22:54:50.915562   64902 kubeadm.go:393] duration metric: took 9.147152807s to StartCluster
	I0404 22:54:50.915580   64902 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.915663   64902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:54:50.917278   64902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:54:50.917558   64902 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:54:50.919536   64902 out.go:177] * Verifying Kubernetes components...
	I0404 22:54:50.917632   64902 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:54:50.917801   64902 config.go:182] Loaded profile config "embed-certs-143118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:54:50.921079   64902 addons.go:69] Setting default-storageclass=true in profile "embed-certs-143118"
	I0404 22:54:50.921090   64902 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-143118"
	I0404 22:54:50.921093   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:50.921114   64902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-143118"
	I0404 22:54:50.921138   64902 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-143118"
	W0404 22:54:50.921153   64902 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:54:50.921079   64902 addons.go:69] Setting metrics-server=true in profile "embed-certs-143118"
	I0404 22:54:50.921195   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921217   64902 addons.go:234] Setting addon metrics-server=true in "embed-certs-143118"
	W0404 22:54:50.921232   64902 addons.go:243] addon metrics-server should already be in state true
	I0404 22:54:50.921254   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.921467   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921507   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921683   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921709   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.921753   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.921781   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.937216   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36235
	I0404 22:54:50.937299   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0404 22:54:50.937762   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.937802   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.938291   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938313   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938321   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.938333   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.938661   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.938670   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.939249   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939271   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.939300   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.939311   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.941042   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0404 22:54:50.941525   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.942072   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.942100   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.942499   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.942729   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.946461   64902 addons.go:234] Setting addon default-storageclass=true in "embed-certs-143118"
	W0404 22:54:50.946482   64902 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:54:50.946510   64902 host.go:66] Checking if "embed-certs-143118" exists ...
	I0404 22:54:50.946882   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.946915   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.955518   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0404 22:54:50.955557   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0404 22:54:50.955998   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956052   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.956571   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956600   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956720   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.956747   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.956986   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957070   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.957190   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.957232   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.959071   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.959241   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.961349   64902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:50.963021   64902 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:54:50.964576   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:54:50.964596   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:54:50.963102   64902 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:50.964618   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.964632   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:54:50.964657   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.965167   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0404 22:54:50.965591   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.966202   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.966227   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.966554   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.967093   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:50.967129   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:50.968169   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968490   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968564   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.968589   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.968740   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.968871   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969009   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969078   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.969104   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.969280   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.969336   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.969576   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.969732   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.969883   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:50.985964   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0404 22:54:50.986398   64902 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:50.986935   64902 main.go:141] libmachine: Using API Version  1
	I0404 22:54:50.986961   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:50.987432   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:50.987635   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetState
	I0404 22:54:50.989705   64902 main.go:141] libmachine: (embed-certs-143118) Calling .DriverName
	I0404 22:54:50.989956   64902 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:50.989970   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:54:50.989984   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHHostname
	I0404 22:54:50.993058   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993505   64902 main.go:141] libmachine: (embed-certs-143118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:29:65", ip: ""} in network mk-embed-certs-143118: {Iface:virbr3 ExpiryTime:2024-04-04 23:54:25 +0000 UTC Type:0 Mac:52:54:00:c1:29:65 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:embed-certs-143118 Clientid:01:52:54:00:c1:29:65}
	I0404 22:54:50.993540   64902 main.go:141] libmachine: (embed-certs-143118) DBG | domain embed-certs-143118 has defined IP address 192.168.61.137 and MAC address 52:54:00:c1:29:65 in network mk-embed-certs-143118
	I0404 22:54:50.993703   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHPort
	I0404 22:54:50.993916   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHKeyPath
	I0404 22:54:50.994072   64902 main.go:141] libmachine: (embed-certs-143118) Calling .GetSSHUsername
	I0404 22:54:50.994255   64902 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/embed-certs-143118/id_rsa Username:docker}
	I0404 22:54:51.108711   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:54:51.128024   64902 node_ready.go:35] waiting up to 6m0s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:51.190119   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:54:51.199620   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:54:51.199648   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:54:51.223007   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:54:51.235174   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:54:51.235203   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:54:51.260985   64902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:51.261010   64902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:54:51.283555   64902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:54:52.285657   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095501296s)
	I0404 22:54:52.285706   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.062659931s)
	I0404 22:54:52.285731   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285744   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.285755   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.285767   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286091   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286108   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286118   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286128   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.286218   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.286282   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.286294   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.286306   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.286328   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.288015   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288022   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.288013   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288092   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.288106   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.288149   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.293937   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.293962   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.294199   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.294243   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.294254   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320063   64902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.036455618s)
	I0404 22:54:52.320206   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320223   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320525   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320537   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320546   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320559   64902 main.go:141] libmachine: Making call to close driver server
	I0404 22:54:52.320568   64902 main.go:141] libmachine: (embed-certs-143118) Calling .Close
	I0404 22:54:52.320821   64902 main.go:141] libmachine: (embed-certs-143118) DBG | Closing plugin on server side
	I0404 22:54:52.320853   64902 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:54:52.320860   64902 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:54:52.320870   64902 addons.go:470] Verifying addon metrics-server=true in "embed-certs-143118"
	I0404 22:54:52.323920   64902 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0404 22:54:50.405817   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:50.406204   65047 main.go:141] libmachine: (no-preload-024416) DBG | unable to find current IP address of domain no-preload-024416 in network mk-no-preload-024416
	I0404 22:54:50.406240   65047 main.go:141] libmachine: (no-preload-024416) DBG | I0404 22:54:50.406163   65995 retry.go:31] will retry after 3.600637009s: waiting for machine to come up
	I0404 22:54:55.277382   65393 start.go:364] duration metric: took 3m58.692301923s to acquireMachinesLock for "old-k8s-version-343162"
	I0404 22:54:55.277485   65393 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:54:55.277502   65393 fix.go:54] fixHost starting: 
	I0404 22:54:55.277878   65393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:54:55.277920   65393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:54:55.297525   65393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0404 22:54:55.297963   65393 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:54:55.298433   65393 main.go:141] libmachine: Using API Version  1
	I0404 22:54:55.298456   65393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:54:55.298792   65393 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:54:55.298962   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:54:55.299129   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetState
	I0404 22:54:55.300682   65393 fix.go:112] recreateIfNeeded on old-k8s-version-343162: state=Stopped err=<nil>
	I0404 22:54:55.300717   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	W0404 22:54:55.300937   65393 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:54:55.303925   65393 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-343162" ...
	I0404 22:54:52.325280   64902 addons.go:505] duration metric: took 1.407646741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0404 22:54:53.132199   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.136881   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:55.305412   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .Start
	I0404 22:54:55.305607   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring networks are active...
	I0404 22:54:55.306344   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network default is active
	I0404 22:54:55.306786   65393 main.go:141] libmachine: (old-k8s-version-343162) Ensuring network mk-old-k8s-version-343162 is active
	I0404 22:54:55.307281   65393 main.go:141] libmachine: (old-k8s-version-343162) Getting domain xml...
	I0404 22:54:55.308086   65393 main.go:141] libmachine: (old-k8s-version-343162) Creating domain...
	I0404 22:54:54.010930   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011401   65047 main.go:141] libmachine: (no-preload-024416) Found IP for machine: 192.168.50.77
	I0404 22:54:54.011426   65047 main.go:141] libmachine: (no-preload-024416) Reserving static IP address...
	I0404 22:54:54.011444   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has current primary IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.011871   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.011896   65047 main.go:141] libmachine: (no-preload-024416) DBG | skip adding static IP to network mk-no-preload-024416 - found existing host DHCP lease matching {name: "no-preload-024416", mac: "52:54:00:9b:35:e3", ip: "192.168.50.77"}
	I0404 22:54:54.011924   65047 main.go:141] libmachine: (no-preload-024416) Reserved static IP address: 192.168.50.77
	I0404 22:54:54.011942   65047 main.go:141] libmachine: (no-preload-024416) Waiting for SSH to be available...
	I0404 22:54:54.011956   65047 main.go:141] libmachine: (no-preload-024416) DBG | Getting to WaitForSSH function...
	I0404 22:54:54.014714   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015164   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.015190   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.015357   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH client type: external
	I0404 22:54:54.015401   65047 main.go:141] libmachine: (no-preload-024416) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa (-rw-------)
	I0404 22:54:54.015469   65047 main.go:141] libmachine: (no-preload-024416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:54:54.015495   65047 main.go:141] libmachine: (no-preload-024416) DBG | About to run SSH command:
	I0404 22:54:54.015513   65047 main.go:141] libmachine: (no-preload-024416) DBG | exit 0
	I0404 22:54:54.140293   65047 main.go:141] libmachine: (no-preload-024416) DBG | SSH cmd err, output: <nil>: 
	I0404 22:54:54.140713   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetConfigRaw
	I0404 22:54:54.141290   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.144062   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144446   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.144476   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.144752   65047 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/config.json ...
	I0404 22:54:54.144967   65047 machine.go:94] provisionDockerMachine start ...
	I0404 22:54:54.144984   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:54.145210   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.147699   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148016   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.148046   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.148244   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.148430   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148571   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.148704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.148878   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.149071   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.149082   65047 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:54:54.256984   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:54:54.257021   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257324   65047 buildroot.go:166] provisioning hostname "no-preload-024416"
	I0404 22:54:54.257354   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.257530   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.259995   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260339   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.260369   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.260515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.260709   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260862   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.260986   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.261172   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.261348   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.261360   65047 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-024416 && echo "no-preload-024416" | sudo tee /etc/hostname
	I0404 22:54:54.383376   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-024416
	
	I0404 22:54:54.383401   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.386482   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.386993   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.387035   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.387179   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.387368   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387515   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.387705   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.387954   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.388225   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.388258   65047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-024416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-024416/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-024416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:54:54.506307   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:54:54.506341   65047 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:54:54.506358   65047 buildroot.go:174] setting up certificates
	I0404 22:54:54.506366   65047 provision.go:84] configureAuth start
	I0404 22:54:54.506375   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetMachineName
	I0404 22:54:54.506653   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:54.509737   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510146   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.510177   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.510350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.512864   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513212   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.513244   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.513339   65047 provision.go:143] copyHostCerts
	I0404 22:54:54.513391   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:54:54.513410   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:54:54.513464   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:54:54.513547   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:54:54.513559   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:54:54.513579   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:54:54.513642   65047 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:54:54.513653   65047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:54:54.513672   65047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:54:54.513728   65047 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.no-preload-024416 san=[127.0.0.1 192.168.50.77 localhost minikube no-preload-024416]
	I0404 22:54:54.574205   65047 provision.go:177] copyRemoteCerts
	I0404 22:54:54.574261   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:54:54.574292   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.577012   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577382   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.577417   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.577576   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.577750   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.577913   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.578009   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:54.662795   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:54:54.688507   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:54:54.714322   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0404 22:54:54.743237   65047 provision.go:87] duration metric: took 236.859201ms to configureAuth
	I0404 22:54:54.743277   65047 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:54:54.743504   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:54:54.743573   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:54.746762   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747249   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:54.747277   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:54.747454   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:54.747656   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747791   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:54.747912   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:54.748073   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:54.748321   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:54.748342   65047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:54:55.023000   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:54:55.023025   65047 machine.go:97] duration metric: took 878.045187ms to provisionDockerMachine
	I0404 22:54:55.023038   65047 start.go:293] postStartSetup for "no-preload-024416" (driver="kvm2")
	I0404 22:54:55.023052   65047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:54:55.023072   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.023459   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:54:55.023493   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.026491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.026914   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.026938   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.027162   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.027350   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.027490   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.027627   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.111497   65047 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:54:55.116152   65047 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:54:55.116178   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:54:55.116260   65047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:54:55.116331   65047 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:54:55.116412   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:54:55.126233   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:54:55.159228   65047 start.go:296] duration metric: took 136.175727ms for postStartSetup
	I0404 22:54:55.159267   65047 fix.go:56] duration metric: took 20.217610648s for fixHost
	I0404 22:54:55.159286   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.162009   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162439   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.162469   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.162665   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.162941   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163119   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.163353   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.163531   65047 main.go:141] libmachine: Using SSH client type: native
	I0404 22:54:55.163697   65047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0404 22:54:55.163707   65047 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:54:55.277216   65047 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271295.257019839
	
	I0404 22:54:55.277241   65047 fix.go:216] guest clock: 1712271295.257019839
	I0404 22:54:55.277254   65047 fix.go:229] Guest: 2024-04-04 22:54:55.257019839 +0000 UTC Remote: 2024-04-04 22:54:55.159270151 +0000 UTC m=+291.659154910 (delta=97.749688ms)
	I0404 22:54:55.277281   65047 fix.go:200] guest clock delta is within tolerance: 97.749688ms
	I0404 22:54:55.277308   65047 start.go:83] releasing machines lock for "no-preload-024416", held for 20.335673292s
	I0404 22:54:55.277345   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.277650   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:55.280507   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.280968   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.280998   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.281137   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281654   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281845   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:54:55.281930   65047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:54:55.281967   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.282068   65047 ssh_runner.go:195] Run: cat /version.json
	I0404 22:54:55.282085   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:54:55.284662   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.284975   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285007   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285034   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285217   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285379   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.285469   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:55.285491   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:55.285542   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.285677   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:54:55.285722   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.286037   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:54:55.286197   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:54:55.286343   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:54:55.404469   65047 ssh_runner.go:195] Run: systemctl --version
	I0404 22:54:55.411104   65047 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:54:55.570700   65047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:54:55.577807   65047 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:54:55.577879   65047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:54:55.595095   65047 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:54:55.595115   65047 start.go:494] detecting cgroup driver to use...
	I0404 22:54:55.595180   65047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:54:55.611801   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:54:55.626402   65047 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:54:55.626460   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:54:55.642719   65047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:54:55.663140   65047 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:54:55.784155   65047 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:54:55.933709   65047 docker.go:233] disabling docker service ...
	I0404 22:54:55.933782   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:54:55.951743   65047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:54:55.966349   65047 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:54:56.129621   65047 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:54:56.295246   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:54:56.312815   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:54:56.334486   65047 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:54:56.334554   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.351987   65047 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:54:56.352067   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.368799   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.390239   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.409407   65047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:54:56.421785   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.434016   65047 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.457813   65047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:54:56.474509   65047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:54:56.489126   65047 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:54:56.489186   65047 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:54:56.509461   65047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:54:56.523048   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:54:56.708034   65047 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:54:56.860854   65047 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:54:56.860931   65047 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:54:56.867310   65047 start.go:562] Will wait 60s for crictl version
	I0404 22:54:56.867380   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:56.871431   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:54:56.911015   65047 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:54:56.911096   65047 ssh_runner.go:195] Run: crio --version
	I0404 22:54:56.942313   65047 ssh_runner.go:195] Run: crio --version
	I0404 22:54:56.985247   65047 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0404 22:54:56.986628   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetIP
	I0404 22:54:56.990377   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.990775   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:54:56.990805   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:54:56.991069   65047 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0404 22:54:56.996831   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:54:57.013429   65047 kubeadm.go:877] updating cluster {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:54:57.013580   65047 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 22:54:57.013630   65047 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:54:57.066460   65047 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0404 22:54:57.066491   65047 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:54:57.066546   65047 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.066569   65047 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.066598   65047 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.066677   65047 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.066674   65047 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.066703   65047 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.066755   65047 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0404 22:54:57.066974   65047 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068510   65047 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.068579   65047 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.068590   65047 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.068603   65047 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.068648   65047 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.068516   65047 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.068666   65047 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0404 22:54:57.068727   65047 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.282811   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.319459   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.329131   65047 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0404 22:54:57.329170   65047 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.329216   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.334176   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.395259   65047 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0404 22:54:57.395307   65047 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.395333   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0404 22:54:57.395348   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.406668   65047 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0404 22:54:57.406719   65047 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.406769   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.428655   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0404 22:54:57.430830   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.434683   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.439273   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439326   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0404 22:54:57.439406   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.439428   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0404 22:54:57.565067   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690361   65047 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0404 22:54:57.690402   65047 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.690456   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690478   65047 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0404 22:54:57.690509   65047 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.690556   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.690563   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690608   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0404 22:54:57.690635   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0404 22:54:57.690653   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690669   65047 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0404 22:54:57.690702   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0404 22:54:57.690737   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:57.690702   65047 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:57.690667   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:54:57.690772   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:54:57.702038   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0404 22:54:57.703294   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0404 22:54:57.703314   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0404 22:54:57.861311   65047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:57.632801   64902 node_ready.go:53] node "embed-certs-143118" has status "Ready":"False"
	I0404 22:54:58.632291   64902 node_ready.go:49] node "embed-certs-143118" has status "Ready":"True"
	I0404 22:54:58.632328   64902 node_ready.go:38] duration metric: took 7.504264997s for node "embed-certs-143118" to be "Ready" ...
	I0404 22:54:58.632341   64902 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:54:58.640851   64902 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652075   64902 pod_ready.go:92] pod "coredns-76f75df574-9qh9s" in "kube-system" namespace has status "Ready":"True"
	I0404 22:54:58.652100   64902 pod_ready.go:81] duration metric: took 11.215741ms for pod "coredns-76f75df574-9qh9s" in "kube-system" namespace to be "Ready" ...
	I0404 22:54:58.652114   64902 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:00.659219   64902 pod_ready.go:102] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"False"
	I0404 22:54:56.653772   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting to get IP...
	I0404 22:54:56.654708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.655215   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.655284   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.655205   66200 retry.go:31] will retry after 274.074964ms: waiting for machine to come up
	I0404 22:54:56.930932   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:56.931386   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:56.931451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:56.931345   66200 retry.go:31] will retry after 260.84968ms: waiting for machine to come up
	I0404 22:54:57.193886   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.194346   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.194368   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.194299   66200 retry.go:31] will retry after 334.40821ms: waiting for machine to come up
	I0404 22:54:57.530137   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:57.530694   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:57.530742   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:57.530631   66200 retry.go:31] will retry after 514.498594ms: waiting for machine to come up
	I0404 22:54:58.046394   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.046985   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.047025   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.046905   66200 retry.go:31] will retry after 466.899368ms: waiting for machine to come up
	I0404 22:54:58.515516   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:58.516090   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:58.516224   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:58.516149   66200 retry.go:31] will retry after 670.74835ms: waiting for machine to come up
	I0404 22:54:59.187992   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:54:59.188549   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:54:59.188579   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:54:59.188496   66200 retry.go:31] will retry after 849.489739ms: waiting for machine to come up
	I0404 22:55:00.039689   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:00.040255   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:00.040290   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:00.040190   66200 retry.go:31] will retry after 1.450679545s: waiting for machine to come up
	I0404 22:54:59.998625   65047 ssh_runner.go:235] Completed: which crictl: (2.307831694s)
	I0404 22:54:59.998689   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.307893929s)
	I0404 22:54:59.998766   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0404 22:54:59.998771   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0: (2.296702121s)
	I0404 22:54:59.998810   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0: (2.295483217s)
	I0404 22:54:59.998704   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0404 22:54:59.998853   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998720   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.307994873s)
	I0404 22:54:59.998819   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.998930   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0404 22:54:59.998951   65047 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998854   65047 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.137514139s)
	I0404 22:54:59.998961   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:54:59.998990   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0404 22:54:59.998992   65047 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0404 22:54:59.998998   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:54:59.999027   65047 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:54:59.999070   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:55:00.009724   65047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:00.009945   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0404 22:55:01.660284   64902 pod_ready.go:92] pod "etcd-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.660348   64902 pod_ready.go:81] duration metric: took 3.008175771s for pod "etcd-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.660362   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666661   64902 pod_ready.go:92] pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.666687   64902 pod_ready.go:81] duration metric: took 6.316138ms for pod "kube-apiserver-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.666701   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672891   64902 pod_ready.go:92] pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.672915   64902 pod_ready.go:81] duration metric: took 6.205783ms for pod "kube-controller-manager-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.672928   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679237   64902 pod_ready.go:92] pod "kube-proxy-psst7" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.679268   64902 pod_ready.go:81] duration metric: took 6.332124ms for pod "kube-proxy-psst7" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.679280   64902 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833788   64902 pod_ready.go:92] pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:01.833817   64902 pod_ready.go:81] duration metric: took 154.528833ms for pod "kube-scheduler-embed-certs-143118" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:01.833831   64902 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:03.841405   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:05.842970   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:01.492870   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:01.493414   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:01.493440   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:01.493366   66200 retry.go:31] will retry after 1.844779665s: waiting for machine to come up
	I0404 22:55:03.340065   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:03.340708   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:03.340739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:03.340650   66200 retry.go:31] will retry after 1.954275124s: waiting for machine to come up
	I0404 22:55:05.297408   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:05.297938   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:05.297986   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:05.297876   66200 retry.go:31] will retry after 1.771664796s: waiting for machine to come up
	I0404 22:55:04.067188   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.068174109s)
	I0404 22:55:04.067285   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0404 22:55:04.067284   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.057537661s)
	I0404 22:55:04.067312   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067322   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0404 22:55:04.067244   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (4.068231608s)
	I0404 22:55:04.067203   65047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (4.068350332s)
	I0404 22:55:04.067371   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0404 22:55:04.067380   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0404 22:55:04.067395   65047 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0404 22:55:04.067414   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:04.067486   65047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:06.758753   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.691348313s)
	I0404 22:55:06.758790   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0404 22:55:06.758816   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:06.758812   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.691301898s)
	I0404 22:55:06.758845   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0404 22:55:06.758850   65047 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.691417107s)
	I0404 22:55:06.758876   65047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0404 22:55:06.758884   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0404 22:55:07.843038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:10.342614   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:07.072194   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:07.072586   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:07.072614   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:07.072532   66200 retry.go:31] will retry after 2.503470191s: waiting for machine to come up
	I0404 22:55:09.577312   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:09.577837   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | unable to find current IP address of domain old-k8s-version-343162 in network mk-old-k8s-version-343162
	I0404 22:55:09.577877   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | I0404 22:55:09.577785   66200 retry.go:31] will retry after 3.900843515s: waiting for machine to come up
	I0404 22:55:09.231002   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.472089065s)
	I0404 22:55:09.231033   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0404 22:55:09.231064   65047 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:09.231114   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0404 22:55:10.787933   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.556787731s)
	I0404 22:55:10.788017   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0404 22:55:10.788087   65047 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:10.788163   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0404 22:55:12.668739   65047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880546401s)
	I0404 22:55:12.668772   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0404 22:55:12.668806   65047 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:12.668877   65047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0404 22:55:15.073701   64791 start.go:364] duration metric: took 55.863036918s to acquireMachinesLock for "default-k8s-diff-port-952083"
	I0404 22:55:15.073768   64791 start.go:96] Skipping create...Using existing machine configuration
	I0404 22:55:15.073780   64791 fix.go:54] fixHost starting: 
	I0404 22:55:15.074227   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:15.074264   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:15.094449   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0404 22:55:15.094905   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:15.095422   64791 main.go:141] libmachine: Using API Version  1
	I0404 22:55:15.095446   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:15.095809   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:15.096067   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:15.096216   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 22:55:15.097752   64791 fix.go:112] recreateIfNeeded on default-k8s-diff-port-952083: state=Stopped err=<nil>
	I0404 22:55:15.097779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	W0404 22:55:15.097988   64791 fix.go:138] unexpected machine state, will restart: <nil>
	I0404 22:55:15.100278   64791 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-952083" ...
	I0404 22:55:12.840463   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:14.841903   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:13.482797   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483264   65393 main.go:141] libmachine: (old-k8s-version-343162) Found IP for machine: 192.168.39.247
	I0404 22:55:13.483286   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has current primary IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.483293   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserving static IP address...
	I0404 22:55:13.483713   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.483753   65393 main.go:141] libmachine: (old-k8s-version-343162) Reserved static IP address: 192.168.39.247
	I0404 22:55:13.483776   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | skip adding static IP to network mk-old-k8s-version-343162 - found existing host DHCP lease matching {name: "old-k8s-version-343162", mac: "52:54:00:74:db:c6", ip: "192.168.39.247"}
	I0404 22:55:13.483790   65393 main.go:141] libmachine: (old-k8s-version-343162) Waiting for SSH to be available...
	I0404 22:55:13.483810   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Getting to WaitForSSH function...
	I0404 22:55:13.485889   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486260   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.486303   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.486379   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH client type: external
	I0404 22:55:13.486411   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa (-rw-------)
	I0404 22:55:13.486451   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:13.486465   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | About to run SSH command:
	I0404 22:55:13.486493   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | exit 0
	I0404 22:55:13.616680   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:13.617056   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetConfigRaw
	I0404 22:55:13.617819   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:13.620441   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.620959   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.620987   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.621259   65393 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/config.json ...
	I0404 22:55:13.621490   65393 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:13.621512   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:13.621715   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.624353   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.624739   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.624769   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.625019   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.625218   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625392   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.625540   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.625730   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.625987   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.626005   65393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:13.745162   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:13.745198   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745533   65393 buildroot.go:166] provisioning hostname "old-k8s-version-343162"
	I0404 22:55:13.745558   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:13.745773   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.748881   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749270   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.749304   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.749393   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.749619   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749787   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.749982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.750304   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.750599   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.750624   65393 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-343162 && echo "old-k8s-version-343162" | sudo tee /etc/hostname
	I0404 22:55:13.887895   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-343162
	
	I0404 22:55:13.887942   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:13.891149   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891606   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:13.891657   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:13.891985   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:13.892226   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892464   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:13.892629   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:13.892839   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:13.893110   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:13.893139   65393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-343162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-343162/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-343162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:14.026612   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:55:14.026651   65393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:14.026677   65393 buildroot.go:174] setting up certificates
	I0404 22:55:14.026691   65393 provision.go:84] configureAuth start
	I0404 22:55:14.026702   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetMachineName
	I0404 22:55:14.026996   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:14.030366   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.030831   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.030869   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.031171   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.033684   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034074   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.034115   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.034266   65393 provision.go:143] copyHostCerts
	I0404 22:55:14.034319   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:14.034331   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:14.034384   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:14.034471   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:14.034479   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:14.034498   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:14.034559   65393 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:14.034567   65393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:14.034585   65393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:14.034633   65393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-343162 san=[127.0.0.1 192.168.39.247 localhost minikube old-k8s-version-343162]
	I0404 22:55:14.305753   65393 provision.go:177] copyRemoteCerts
	I0404 22:55:14.305805   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:14.305830   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.308454   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308772   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.308812   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.308940   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.309140   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.309315   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.309461   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.399449   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:14.431583   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0404 22:55:14.459757   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0404 22:55:14.489959   65393 provision.go:87] duration metric: took 463.256669ms to configureAuth
	I0404 22:55:14.489999   65393 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:14.490222   65393 config.go:182] Loaded profile config "old-k8s-version-343162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:55:14.490314   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.493316   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493721   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.493746   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.493935   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.494154   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494339   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.494495   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.494669   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.494915   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.494940   65393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:14.813278   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:14.813306   65393 machine.go:97] duration metric: took 1.19180289s to provisionDockerMachine
	I0404 22:55:14.813318   65393 start.go:293] postStartSetup for "old-k8s-version-343162" (driver="kvm2")
	I0404 22:55:14.813328   65393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:14.813365   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:14.813712   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:14.813739   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.817108   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817543   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.817577   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.817770   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.817970   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.818192   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.818371   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:14.908922   65393 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:14.914161   65393 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:14.914190   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:14.914279   65393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:14.914388   65393 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:14.914522   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:14.924829   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:14.952058   65393 start.go:296] duration metric: took 138.725667ms for postStartSetup
	I0404 22:55:14.952103   65393 fix.go:56] duration metric: took 19.674601379s for fixHost
	I0404 22:55:14.952151   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:14.954833   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955233   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:14.955273   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:14.955501   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:14.955712   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.955866   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:14.956022   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:14.956189   65393 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:14.956355   65393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0404 22:55:14.956367   65393 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0404 22:55:15.073532   65393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271315.019545661
	
	I0404 22:55:15.073560   65393 fix.go:216] guest clock: 1712271315.019545661
	I0404 22:55:15.073569   65393 fix.go:229] Guest: 2024-04-04 22:55:15.019545661 +0000 UTC Remote: 2024-04-04 22:55:14.952108062 +0000 UTC m=+258.528860140 (delta=67.437599ms)
	I0404 22:55:15.073595   65393 fix.go:200] guest clock delta is within tolerance: 67.437599ms
	I0404 22:55:15.073603   65393 start.go:83] releasing machines lock for "old-k8s-version-343162", held for 19.79616835s
	I0404 22:55:15.073645   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.073982   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:15.077001   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077416   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.077464   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.077684   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078280   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078473   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .DriverName
	I0404 22:55:15.078552   65393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:15.078592   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.078687   65393 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:15.078714   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHHostname
	I0404 22:55:15.081320   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081704   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.081724   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081752   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.081995   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082190   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082212   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:15.082249   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:15.082366   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082519   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHPort
	I0404 22:55:15.082557   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.082639   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHKeyPath
	I0404 22:55:15.082782   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetSSHUsername
	I0404 22:55:15.082912   65393 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/old-k8s-version-343162/id_rsa Username:docker}
	I0404 22:55:15.165958   65393 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:15.206476   65393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:15.356625   65393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:15.364893   65393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:15.364973   65393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:15.387634   65393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:15.387663   65393 start.go:494] detecting cgroup driver to use...
	I0404 22:55:15.387728   65393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:15.408455   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:15.427753   65393 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:15.427812   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:15.443713   65393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:15.462657   65393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:15.611611   65393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:15.799003   65393 docker.go:233] disabling docker service ...
	I0404 22:55:15.799058   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:15.819716   65393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:15.838428   65393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:15.998060   65393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:16.143821   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:16.162284   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:16.185030   65393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0404 22:55:16.185124   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.201354   65393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:16.201422   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.214191   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.232995   65393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:16.246872   65393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:16.260730   65393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:16.272532   65393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:16.272601   65393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:16.289516   65393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:16.306546   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:16.469991   65393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:55:15.101770   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Start
	I0404 22:55:15.101942   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring networks are active...
	I0404 22:55:15.102800   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network default is active
	I0404 22:55:15.103162   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Ensuring network mk-default-k8s-diff-port-952083 is active
	I0404 22:55:15.103685   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Getting domain xml...
	I0404 22:55:15.104598   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Creating domain...
	I0404 22:55:16.462772   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting to get IP...
	I0404 22:55:16.463782   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.464380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.464303   66369 retry.go:31] will retry after 209.546798ms: waiting for machine to come up
	I0404 22:55:16.618533   65393 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:16.618609   65393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:16.623677   65393 start.go:562] Will wait 60s for crictl version
	I0404 22:55:16.623740   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:16.627698   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:16.667097   65393 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:16.667195   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.700780   65393 ssh_runner.go:195] Run: crio --version
	I0404 22:55:16.739287   65393 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0404 22:55:13.614563   65047 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0404 22:55:13.614614   65047 cache_images.go:123] Successfully loaded all cached images
	I0404 22:55:13.614620   65047 cache_images.go:92] duration metric: took 16.548112387s to LoadCachedImages
	I0404 22:55:13.614638   65047 kubeadm.go:928] updating node { 192.168.50.77 8443 v1.30.0-rc.0 crio true true} ...
	I0404 22:55:13.614766   65047 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-024416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:13.614859   65047 ssh_runner.go:195] Run: crio config
	I0404 22:55:13.670321   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:13.670352   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:13.670368   65047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:13.670397   65047 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.77 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-024416 NodeName:no-preload-024416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:13.670593   65047 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-024416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:13.670688   65047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0404 22:55:13.683863   65047 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:13.683944   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:13.694995   65047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0404 22:55:13.717964   65047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0404 22:55:13.738088   65047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
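A quick way to eyeball the multi-document config that minikube writes to /var/tmp/minikube/kubeadm.yaml.new (2163 bytes, copied over just above) is to decode it document by document. The following is a minimal sketch and not part of the logged flow; it assumes the file has been copied off the node to the working directory and that gopkg.in/yaml.v3 is available.

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path is illustrative; on the node the file is /var/tmp/minikube/kubeadm.yaml.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// The file is a YAML stream: InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration and KubeProxyConfiguration, separated by "---".
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				log.Fatal(err)
			}
			fmt.Println("document kind:", doc["kind"])
			if doc["kind"] == "KubeletConfiguration" {
				// e.g. the eviction thresholds rendered above as "0%"
				fmt.Println("  evictionHard:", doc["evictionHard"])
			}
		}
	}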
	I0404 22:55:13.762413   65047 ssh_runner.go:195] Run: grep 192.168.50.77	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:13.767294   65047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:13.783621   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:13.925851   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:13.946583   65047 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416 for IP: 192.168.50.77
	I0404 22:55:13.946612   65047 certs.go:194] generating shared ca certs ...
	I0404 22:55:13.946632   65047 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:13.946873   65047 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:13.946932   65047 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:13.946946   65047 certs.go:256] generating profile certs ...
	I0404 22:55:13.947038   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/client.key
	I0404 22:55:13.947126   65047 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key.8f9148e7
	I0404 22:55:13.947183   65047 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key
	I0404 22:55:13.947338   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:13.947388   65047 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:13.947402   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:13.947442   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:13.947480   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:13.947501   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:13.947538   65047 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:13.948161   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:13.987518   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:14.023442   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:14.054901   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:14.096855   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0404 22:55:14.140589   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:14.171957   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:14.200219   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/no-preload-024416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0404 22:55:14.228332   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:14.255955   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:14.281822   65047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:14.310385   65047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:14.330943   65047 ssh_runner.go:195] Run: openssl version
	I0404 22:55:14.337412   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:14.350470   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355351   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.355405   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:14.361799   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:14.374526   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:14.388775   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394578   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.394643   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:14.401888   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:14.415796   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:14.431180   65047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437443   65047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.437498   65047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:14.444439   65047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:55:14.458554   65047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:14.463877   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:14.471842   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:14.478423   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:14.485289   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:14.491991   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:14.499145   65047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:55:14.505734   65047 kubeadm.go:391] StartCluster: {Name:no-preload-024416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-024416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:14.505842   65047 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:14.505916   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.552731   65047 cri.go:89] found id: ""
	I0404 22:55:14.552818   65047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:14.564111   65047 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:14.564146   65047 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:14.564153   65047 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:14.564206   65047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:14.577068   65047 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:14.578074   65047 kubeconfig.go:125] found "no-preload-024416" server: "https://192.168.50.77:8443"
	I0404 22:55:14.579998   65047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:14.592375   65047 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.77
	I0404 22:55:14.592404   65047 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:14.592415   65047 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:14.592459   65047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:14.638880   65047 cri.go:89] found id: ""
	I0404 22:55:14.638970   65047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:14.663010   65047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:14.675814   65047 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:14.675853   65047 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:14.675900   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:14.688308   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:14.688375   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:14.702880   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:14.716656   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:14.716728   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:14.728605   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.739218   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:14.739283   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:14.750108   65047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:14.761607   65047 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:14.761671   65047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:14.774056   65047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:14.787939   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:14.909162   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.334914   65047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.425719775s)
	I0404 22:55:16.334945   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.609889   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.686278   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:16.774460   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:16.774563   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.274803   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:17.775665   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.274707   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:18.298277   65047 api_server.go:72] duration metric: took 1.523817264s to wait for apiserver process to appear ...
	I0404 22:55:18.298307   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:18.298329   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:18.298961   65047 api_server.go:269] stopped: https://192.168.50.77:8443/healthz: Get "https://192.168.50.77:8443/healthz": dial tcp 192.168.50.77:8443: connect: connection refused
	I0404 22:55:16.842824   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:18.845888   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:21.344260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:16.740949   65393 main.go:141] libmachine: (old-k8s-version-343162) Calling .GetIP
	I0404 22:55:16.744057   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744491   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:c6", ip: ""} in network mk-old-k8s-version-343162: {Iface:virbr1 ExpiryTime:2024-04-04 23:55:07 +0000 UTC Type:0 Mac:52:54:00:74:db:c6 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:old-k8s-version-343162 Clientid:01:52:54:00:74:db:c6}
	I0404 22:55:16.744533   65393 main.go:141] libmachine: (old-k8s-version-343162) DBG | domain old-k8s-version-343162 has defined IP address 192.168.39.247 and MAC address 52:54:00:74:db:c6 in network mk-old-k8s-version-343162
	I0404 22:55:16.744749   65393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:16.750820   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:16.769289   65393 kubeadm.go:877] updating cluster {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:16.769467   65393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 22:55:16.769531   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:16.827000   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:16.827077   65393 ssh_runner.go:195] Run: which lz4
	I0404 22:55:16.833494   65393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0404 22:55:16.839898   65393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:16.839972   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0404 22:55:18.896288   65393 crio.go:462] duration metric: took 2.062838778s to copy over tarball
	I0404 22:55:18.896369   65393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:16.676096   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.676944   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.676858   66369 retry.go:31] will retry after 272.178949ms: waiting for machine to come up
	I0404 22:55:16.951171   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:16.951771   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:16.951666   66369 retry.go:31] will retry after 296.205822ms: waiting for machine to come up
	I0404 22:55:17.249449   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250006   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.250039   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.249901   66369 retry.go:31] will retry after 395.504604ms: waiting for machine to come up
	I0404 22:55:17.647678   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:17.648471   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:17.648337   66369 retry.go:31] will retry after 465.589308ms: waiting for machine to come up
	I0404 22:55:18.116437   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117692   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.117740   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.117645   66369 retry.go:31] will retry after 763.715105ms: waiting for machine to come up
	I0404 22:55:18.883468   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884148   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:18.884214   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:18.884004   66369 retry.go:31] will retry after 1.098461705s: waiting for machine to come up
	I0404 22:55:19.984522   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985080   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:19.985111   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:19.985029   66369 retry.go:31] will retry after 1.20224728s: waiting for machine to come up
	I0404 22:55:21.188813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:21.189372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:21.189284   66369 retry.go:31] will retry after 1.828752424s: waiting for machine to come up
	I0404 22:55:18.798849   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.231226   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.231258   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.231274   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.260339   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.260380   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.298635   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.330231   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:21.330264   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:21.798461   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:21.808690   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:21.808726   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.299332   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.305181   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.305213   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:22.798717   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:22.818393   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:22.818438   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.298741   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.305958   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.305991   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:23.798813   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:23.804261   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:23.804296   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.298487   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.304891   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0404 22:55:24.304928   65047 api_server.go:103] status: https://192.168.50.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0404 22:55:24.798523   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:55:24.804212   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:55:24.811974   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:55:24.811999   65047 api_server.go:131] duration metric: took 6.513685326s to wait for apiserver health ...
	I0404 22:55:24.812008   65047 cni.go:84] Creating CNI manager for ""
	I0404 22:55:24.812014   65047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:24.814353   65047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
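The repeated /healthz probes above boil down to polling the endpoint until it stops answering 403/500 and returns 200. The following is a minimal sketch of such a probe loop; the URL and the roughly 500ms cadence are taken from the log, while the client setup, names, and timeout are illustrative and not minikube's actual api_server.go code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver's serving cert is not trusted by this ad-hoc client,
			// so skip verification; acceptable for a local health probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.50.77:8443/healthz")
			if err != nil {
				// e.g. "connection refused" while the apiserver is still coming up
				fmt.Println("healthz probe failed:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}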
	I0404 22:55:23.841622   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:25.841739   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:22.459272   65393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562873515s)
	I0404 22:55:22.459298   65393 crio.go:469] duration metric: took 3.56298059s to extract the tarball
	I0404 22:55:22.459305   65393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:22.506283   65393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:22.545033   65393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0404 22:55:22.545060   65393 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0404 22:55:22.545126   65393 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.545137   65393 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.545173   65393 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.545196   65393 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.545238   65393 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.545302   65393 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.545208   65393 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0404 22:55:22.545446   65393 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.546976   65393 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0404 22:55:22.547023   65393 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.547031   65393 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.546980   65393 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.547034   65393 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:22.547073   65393 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.547003   65393 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.547012   65393 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.771721   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0404 22:55:22.774797   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.775291   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.776362   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:22.785066   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0404 22:55:22.785465   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:22.787095   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:22.935936   65393 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0404 22:55:22.935986   65393 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0404 22:55:22.936032   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.978411   65393 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0404 22:55:22.978460   65393 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:22.978521   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:22.988007   65393 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0404 22:55:22.988053   65393 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:22.988095   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001263   65393 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0404 22:55:23.001296   65393 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0404 22:55:23.001325   65393 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.001356   65393 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.001373   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.001400   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007053   65393 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0404 22:55:23.007107   65393 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.007111   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0404 22:55:23.007115   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0404 22:55:23.007135   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.007071   65393 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0404 22:55:23.007139   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0404 22:55:23.007170   65393 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.007207   65393 ssh_runner.go:195] Run: which crictl
	I0404 22:55:23.009469   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0404 22:55:23.010460   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0404 22:55:23.144982   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0404 22:55:23.145036   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0404 22:55:23.149847   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0404 22:55:23.149886   65393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0404 22:55:23.149961   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0404 22:55:23.149931   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0404 22:55:23.210612   65393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0404 22:55:23.386832   65393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.542789   65393 cache_images.go:92] duration metric: took 997.708569ms to LoadCachedImages
	W0404 22:55:23.542901   65393 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16143-5297/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
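
Each "needs transfer" decision above comes from probing the runtime with "sudo podman image inspect --format {{.Id}} <image>" and treating a failed lookup as "image not present in the container runtime". The following is a rough Go sketch of that probe, not the real cache_images.go logic; the image name is taken from the log and podman must be available on the host for it to run.

    // imagecheck.go - illustrative probe for whether an image exists in the runtime.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func imagePresent(image string) bool {
        // mirrors the "sudo podman image inspect --format {{.Id}}" calls in the log
        cmd := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
        return cmd.Run() == nil
    }

    func main() {
        img := "registry.k8s.io/pause:3.2"
        if imagePresent(img) {
            fmt.Printf("%s already present in the runtime\n", img)
        } else {
            fmt.Printf("%s needs transfer from the local image cache\n", img)
        }
    }
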
	I0404 22:55:23.542928   65393 kubeadm.go:928] updating node { 192.168.39.247 8443 v1.20.0 crio true true} ...
	I0404 22:55:23.543082   65393 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-343162 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:23.543199   65393 ssh_runner.go:195] Run: crio config
	I0404 22:55:23.604999   65393 cni.go:84] Creating CNI manager for ""
	I0404 22:55:23.605023   65393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:23.605035   65393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:23.605057   65393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-343162 NodeName:old-k8s-version-343162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0404 22:55:23.605247   65393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-343162"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:23.605324   65393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0404 22:55:23.618943   65393 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:23.619021   65393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:23.631449   65393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0404 22:55:23.653570   65393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:23.674444   65393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0404 22:55:23.696627   65393 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:23.701248   65393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:23.715979   65393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:23.872589   65393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:23.893271   65393 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162 for IP: 192.168.39.247
	I0404 22:55:23.893302   65393 certs.go:194] generating shared ca certs ...
	I0404 22:55:23.893323   65393 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:23.893508   65393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:23.893573   65393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:23.893591   65393 certs.go:256] generating profile certs ...
	I0404 22:55:23.893703   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/client.key
	I0404 22:55:23.893789   65393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key.184368d7
	I0404 22:55:23.893848   65393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key
	I0404 22:55:23.894013   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:23.894060   65393 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:23.894075   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:23.894119   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:23.894152   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:23.894184   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:23.894283   65393 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:23.895146   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:23.930844   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:23.970129   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:24.012084   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:24.061026   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0404 22:55:24.099215   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0404 22:55:24.144924   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:24.179027   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/old-k8s-version-343162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:24.214467   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:24.261693   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:24.294049   65393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:24.330228   65393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:24.357217   65393 ssh_runner.go:195] Run: openssl version
	I0404 22:55:24.364368   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:24.377630   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384429   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.384493   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:24.392806   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:24.409360   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:24.425575   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431294   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.431359   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:24.438465   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:24.452037   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:24.467913   65393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473901   65393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.473964   65393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:24.482161   65393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:55:24.498553   65393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:24.505506   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:24.513143   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:24.520059   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:24.527197   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:24.534384   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:24.541499   65393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
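
The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger certificate regeneration. Below is a small Go equivalent of that check, a sketch rather than minikube's implementation, with the certificate path taken from the log.

    // certcheck.go - illustrative equivalent of "openssl x509 -checkend 86400".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM data found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // expires within the next 24h? mirror openssl's non-zero exit
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 86400 seconds")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }
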
	I0404 22:55:24.548056   65393 kubeadm.go:391] StartCluster: {Name:old-k8s-version-343162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-343162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:24.548157   65393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:24.548208   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.592650   65393 cri.go:89] found id: ""
	I0404 22:55:24.592732   65393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:24.605071   65393 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:24.605101   65393 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:24.605107   65393 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:24.605167   65393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:24.616615   65393 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:24.617680   65393 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-343162" does not appear in /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:24.618355   65393 kubeconfig.go:62] /home/jenkins/minikube-integration/16143-5297/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-343162" cluster setting kubeconfig missing "old-k8s-version-343162" context setting]
	I0404 22:55:24.619375   65393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:24.621522   65393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:24.632596   65393 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.247
	I0404 22:55:24.632645   65393 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:24.632660   65393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:24.632717   65393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:24.683167   65393 cri.go:89] found id: ""
	I0404 22:55:24.683241   65393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:24.703840   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:24.717504   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:24.717524   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:24.717579   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:55:24.729845   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:24.729916   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:24.741544   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:55:24.755004   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:24.755081   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:24.769007   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.782305   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:24.782371   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:24.792627   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:55:24.802737   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:24.802806   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:24.814766   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:24.825422   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.006245   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.377950   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.371665115s)
	I0404 22:55:26.377988   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:24.816240   65047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:24.831576   65047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:55:24.861844   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:24.873557   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:24.873598   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:24.873616   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:24.873627   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:24.873642   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:24.873650   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:24.873661   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:24.873669   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:24.873677   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:24.873690   65047 system_pods.go:74] duration metric: took 11.825989ms to wait for pod list to return data ...
	I0404 22:55:24.873703   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:24.879876   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:24.879907   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:24.879922   65047 node_conditions.go:105] duration metric: took 6.209498ms to run NodePressure ...
	I0404 22:55:24.879960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:25.214317   65047 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:25.220926   65047 kubeadm.go:733] kubelet initialised
	I0404 22:55:25.220998   65047 kubeadm.go:734] duration metric: took 6.651112ms waiting for restarted kubelet to initialise ...
	I0404 22:55:25.221013   65047 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:25.228414   65047 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.245164   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245223   65047 pod_ready.go:81] duration metric: took 16.78145ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.245238   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.245271   65047 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.258160   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258192   65047 pod_ready.go:81] duration metric: took 12.905596ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.258205   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "etcd-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.258215   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.265054   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265079   65047 pod_ready.go:81] duration metric: took 6.856081ms for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.265090   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-apiserver-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.265098   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.276639   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276664   65047 pod_ready.go:81] duration metric: took 11.554875ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.276675   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.276684   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:25.665602   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665631   65047 pod_ready.go:81] duration metric: took 388.935811ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:25.665640   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-proxy-zmx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:25.665646   65047 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.066199   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066229   65047 pod_ready.go:81] duration metric: took 400.576415ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.066242   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "kube-scheduler-no-preload-024416" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.066252   65047 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:26.466681   65047 pod_ready.go:97] node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466715   65047 pod_ready.go:81] duration metric: took 400.447851ms for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:26.466728   65047 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-024416" hosting pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:26.466738   65047 pod_ready.go:38] duration metric: took 1.245712492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
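
The pod_ready lines above repeatedly fetch each system-critical pod and inspect its Ready condition, skipping pods whose node is not yet Ready. Here is a minimal client-go sketch of such a check, not minikube's pod_ready.go helper; the kubeconfig path is an illustrative assumption and the pod name is the coredns pod from the log.

    // podready.go - illustrative client-go check of a pod's Ready condition.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // kubeconfig path is an assumption for the sketch
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-wr424", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s ready: %v\n", pod.Name, isPodReady(pod))
    }
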
	I0404 22:55:26.466760   65047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 22:55:26.482692   65047 ops.go:34] apiserver oom_adj: -16
	I0404 22:55:26.482712   65047 kubeadm.go:591] duration metric: took 11.918553713s to restartPrimaryControlPlane
	I0404 22:55:26.482721   65047 kubeadm.go:393] duration metric: took 11.977000438s to StartCluster
	I0404 22:55:26.482769   65047 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.482864   65047 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:55:26.484995   65047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:26.485328   65047 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 22:55:26.487534   65047 out.go:177] * Verifying Kubernetes components...
	I0404 22:55:26.485383   65047 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 22:55:26.485563   65047 config.go:182] Loaded profile config "no-preload-024416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0404 22:55:26.489030   65047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:26.489050   65047 addons.go:69] Setting default-storageclass=true in profile "no-preload-024416"
	I0404 22:55:26.489093   65047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-024416"
	I0404 22:55:26.489056   65047 addons.go:69] Setting metrics-server=true in profile "no-preload-024416"
	I0404 22:55:26.489149   65047 addons.go:234] Setting addon metrics-server=true in "no-preload-024416"
	W0404 22:55:26.489172   65047 addons.go:243] addon metrics-server should already be in state true
	I0404 22:55:26.489211   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489054   65047 addons.go:69] Setting storage-provisioner=true in profile "no-preload-024416"
	I0404 22:55:26.489277   65047 addons.go:234] Setting addon storage-provisioner=true in "no-preload-024416"
	W0404 22:55:26.489290   65047 addons.go:243] addon storage-provisioner should already be in state true
	I0404 22:55:26.489319   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.489539   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489573   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489591   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489641   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.489667   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.489685   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.508693   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0404 22:55:26.509305   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.510142   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.510166   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.510664   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.510832   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.511052   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41779
	I0404 22:55:26.511503   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.512213   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.512232   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.512619   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.513233   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.513270   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.515412   65047 addons.go:234] Setting addon default-storageclass=true in "no-preload-024416"
	W0404 22:55:26.515459   65047 addons.go:243] addon default-storageclass should already be in state true
	I0404 22:55:26.515498   65047 host.go:66] Checking if "no-preload-024416" exists ...
	I0404 22:55:26.515891   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.515954   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.534673   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I0404 22:55:26.535571   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.536148   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.536173   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.536663   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.536896   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.537603   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37341
	I0404 22:55:26.538883   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.541150   65047 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 22:55:26.539561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0404 22:55:26.539868   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.542772   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 22:55:26.542787   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 22:55:26.542805   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.544250   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.544270   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.544593   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.544676   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.545317   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.545356   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.546171   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.546188   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.546711   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.546722   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547227   65047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:55:26.547285   65047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:55:26.547464   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.547499   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.547704   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.547905   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.548076   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.548259   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.564561   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0404 22:55:26.565127   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.565730   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.565757   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.566293   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.566516   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.567998   65047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0404 22:55:26.568463   65047 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:55:26.568520   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.569025   65047 main.go:141] libmachine: Using API Version  1
	I0404 22:55:26.569047   65047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:55:26.569379   65047 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:55:26.569543   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetState
	I0404 22:55:26.571698   65047 main.go:141] libmachine: (no-preload-024416) Calling .DriverName
	I0404 22:55:26.572551   65047 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.572566   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 22:55:26.572582   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.574538   65047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 22:55:23.019306   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019754   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:23.019781   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:23.019722   66369 retry.go:31] will retry after 1.404874513s: waiting for machine to come up
	I0404 22:55:24.425830   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426412   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:24.426442   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:24.426364   66369 retry.go:31] will retry after 2.757787773s: waiting for machine to come up
	I0404 22:55:26.576073   65047 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.576092   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 22:55:26.576106   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHHostname
	I0404 22:55:26.580084   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.580585   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.581225   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.581277   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.581495   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.581699   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.581854   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.582320   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582345   65047 main.go:141] libmachine: (no-preload-024416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:35:e3", ip: ""} in network mk-no-preload-024416: {Iface:virbr2 ExpiryTime:2024-04-04 23:54:46 +0000 UTC Type:0 Mac:52:54:00:9b:35:e3 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:no-preload-024416 Clientid:01:52:54:00:9b:35:e3}
	I0404 22:55:26.582386   65047 main.go:141] libmachine: (no-preload-024416) DBG | domain no-preload-024416 has defined IP address 192.168.50.77 and MAC address 52:54:00:9b:35:e3 in network mk-no-preload-024416
	I0404 22:55:26.582604   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHPort
	I0404 22:55:26.584316   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHKeyPath
	I0404 22:55:26.584463   65047 main.go:141] libmachine: (no-preload-024416) Calling .GetSSHUsername
	I0404 22:55:26.584614   65047 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/no-preload-024416/id_rsa Username:docker}
	I0404 22:55:26.731392   65047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:26.753167   65047 node_ready.go:35] waiting up to 6m0s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:26.829134   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 22:55:26.955286   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 22:55:26.955377   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 22:55:26.968469   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 22:55:26.984915   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 22:55:26.984948   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 22:55:27.028529   65047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.028558   65047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 22:55:27.092343   65047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 22:55:27.186244   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186319   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186642   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.186662   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.186677   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.186690   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.186709   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.186969   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.187017   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.187031   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.193602   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.193623   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.193903   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.193913   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.193920   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878278   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878305   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.878702   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.878746   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.878760   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:27.878779   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:27.878787   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:27.879104   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:27.879197   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:27.879228   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054442   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054471   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.054800   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.054858   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.054874   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.054896   65047 main.go:141] libmachine: Making call to close driver server
	I0404 22:55:28.054905   65047 main.go:141] libmachine: (no-preload-024416) Calling .Close
	I0404 22:55:28.055233   65047 main.go:141] libmachine: Successfully made call to close driver server
	I0404 22:55:28.055259   65047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 22:55:28.055270   65047 addons.go:470] Verifying addon metrics-server=true in "no-preload-024416"
	I0404 22:55:28.055236   65047 main.go:141] libmachine: (no-preload-024416) DBG | Closing plugin on server side
	I0404 22:55:28.057439   65047 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0404 22:55:28.058994   65047 addons.go:505] duration metric: took 1.573614168s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0404 22:55:27.842914   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:30.342668   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:26.696179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.809448   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:26.955521   65393 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:26.955614   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.456445   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.956055   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.455728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:28.956667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.455874   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:29.955832   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.455677   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:30.956327   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:31.456660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:27.188296   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188797   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:27.188827   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:27.188759   66369 retry.go:31] will retry after 2.351381492s: waiting for machine to come up
	I0404 22:55:29.541200   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541601   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | unable to find current IP address of domain default-k8s-diff-port-952083 in network mk-default-k8s-diff-port-952083
	I0404 22:55:29.541628   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | I0404 22:55:29.541553   66369 retry.go:31] will retry after 4.132646705s: waiting for machine to come up
	I0404 22:55:28.757883   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:31.257480   65047 node_ready.go:53] node "no-preload-024416" has status "Ready":"False"
	I0404 22:55:32.257641   65047 node_ready.go:49] node "no-preload-024416" has status "Ready":"True"
	I0404 22:55:32.257665   65047 node_ready.go:38] duration metric: took 5.504446554s for node "no-preload-024416" to be "Ready" ...
	I0404 22:55:32.257673   65047 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:32.263652   65047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268791   65047 pod_ready.go:92] pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.268811   65047 pod_ready.go:81] duration metric: took 5.13427ms for pod "coredns-7db6d8ff4d-wr424" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.268820   65047 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273130   65047 pod_ready.go:92] pod "etcd-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:32.273147   65047 pod_ready.go:81] duration metric: took 4.32194ms for pod "etcd-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.273155   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:32.841480   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:35.342317   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:31.956195   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.456027   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:32.955974   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.456657   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.956113   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.456687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:34.955878   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.456630   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:35.956721   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:36.456247   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:33.678106   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.678633   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Found IP for machine: 192.168.72.148
	I0404 22:55:33.678669   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserving static IP address...
	I0404 22:55:33.678686   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has current primary IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.679110   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.679145   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Reserved static IP address: 192.168.72.148
	I0404 22:55:33.679165   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | skip adding static IP to network mk-default-k8s-diff-port-952083 - found existing host DHCP lease matching {name: "default-k8s-diff-port-952083", mac: "52:54:00:5c:a7:af", ip: "192.168.72.148"}
	I0404 22:55:33.679184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Getting to WaitForSSH function...
	I0404 22:55:33.679196   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Waiting for SSH to be available...
	I0404 22:55:33.681734   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682113   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.682144   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.682283   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH client type: external
	I0404 22:55:33.682325   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Using SSH private key: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa (-rw-------)
	I0404 22:55:33.682356   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0404 22:55:33.682372   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | About to run SSH command:
	I0404 22:55:33.682427   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | exit 0
	I0404 22:55:33.812360   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | SSH cmd err, output: <nil>: 
	I0404 22:55:33.812704   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetConfigRaw
	I0404 22:55:33.813408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:33.816515   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.816970   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.817052   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.817322   64791 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/config.json ...
	I0404 22:55:33.817590   64791 machine.go:94] provisionDockerMachine start ...
	I0404 22:55:33.817615   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:33.817828   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.820061   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820388   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.820421   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.820604   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.820762   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.820948   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.821063   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.821211   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.821433   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.821450   64791 main.go:141] libmachine: About to run SSH command:
	hostname
	I0404 22:55:33.928894   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0404 22:55:33.928927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929163   64791 buildroot.go:166] provisioning hostname "default-k8s-diff-port-952083"
	I0404 22:55:33.929186   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:33.929342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:33.931838   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932292   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:33.932323   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:33.932506   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:33.932688   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932844   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:33.932988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:33.933158   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:33.933321   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:33.933335   64791 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-952083 && echo "default-k8s-diff-port-952083" | sudo tee /etc/hostname
	I0404 22:55:34.060158   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-952083
	
	I0404 22:55:34.060185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.063179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063552   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.063586   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.063777   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.063975   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064172   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.064314   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.064477   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.064628   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.064650   64791 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-952083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-952083/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-952083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0404 22:55:34.186212   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0404 22:55:34.186240   64791 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16143-5297/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-5297/.minikube}
	I0404 22:55:34.186309   64791 buildroot.go:174] setting up certificates
	I0404 22:55:34.186332   64791 provision.go:84] configureAuth start
	I0404 22:55:34.186351   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetMachineName
	I0404 22:55:34.186637   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:34.189184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189504   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.189544   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.189635   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.191813   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192315   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.192341   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.192507   64791 provision.go:143] copyHostCerts
	I0404 22:55:34.192560   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem, removing ...
	I0404 22:55:34.192569   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem
	I0404 22:55:34.192622   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/key.pem (1675 bytes)
	I0404 22:55:34.192717   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem, removing ...
	I0404 22:55:34.192726   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem
	I0404 22:55:34.192749   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/ca.pem (1078 bytes)
	I0404 22:55:34.192812   64791 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem, removing ...
	I0404 22:55:34.192820   64791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem
	I0404 22:55:34.192838   64791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-5297/.minikube/cert.pem (1123 bytes)
	I0404 22:55:34.192901   64791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-952083 san=[127.0.0.1 192.168.72.148 default-k8s-diff-port-952083 localhost minikube]
	I0404 22:55:34.326920   64791 provision.go:177] copyRemoteCerts
	I0404 22:55:34.326975   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0404 22:55:34.326997   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.329376   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329725   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.329752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.329927   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.330118   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.330260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.330401   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.416395   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0404 22:55:34.443648   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0404 22:55:34.470803   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0404 22:55:34.497420   64791 provision.go:87] duration metric: took 311.071464ms to configureAuth
	I0404 22:55:34.497454   64791 buildroot.go:189] setting minikube options for container-runtime
	I0404 22:55:34.497663   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:55:34.497759   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.500799   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501149   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.501180   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.501387   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.501595   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501779   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.501915   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.502071   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.502271   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.502303   64791 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0404 22:55:34.775054   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0404 22:55:34.775085   64791 machine.go:97] duration metric: took 957.478657ms to provisionDockerMachine
	I0404 22:55:34.775099   64791 start.go:293] postStartSetup for "default-k8s-diff-port-952083" (driver="kvm2")
	I0404 22:55:34.775112   64791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0404 22:55:34.775131   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:34.775488   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0404 22:55:34.775534   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.778577   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779005   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.779036   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.779204   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.779393   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.779524   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.779658   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:34.864675   64791 ssh_runner.go:195] Run: cat /etc/os-release
	I0404 22:55:34.869694   64791 info.go:137] Remote host: Buildroot 2023.02.9
	I0404 22:55:34.869730   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/addons for local assets ...
	I0404 22:55:34.869826   64791 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-5297/.minikube/files for local assets ...
	I0404 22:55:34.869920   64791 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem -> 125542.pem in /etc/ssl/certs
	I0404 22:55:34.870061   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0404 22:55:34.880961   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:34.907037   64791 start.go:296] duration metric: took 131.924012ms for postStartSetup
	I0404 22:55:34.907080   64791 fix.go:56] duration metric: took 19.833301571s for fixHost
	I0404 22:55:34.907108   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:34.909893   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910291   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:34.910316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:34.910473   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:34.910679   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.910880   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:34.911020   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:34.911190   64791 main.go:141] libmachine: Using SSH client type: native
	I0404 22:55:34.911412   64791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0404 22:55:34.911428   64791 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0404 22:55:35.022167   64791 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712271334.996001157
	
	I0404 22:55:35.022190   64791 fix.go:216] guest clock: 1712271334.996001157
	I0404 22:55:35.022200   64791 fix.go:229] Guest: 2024-04-04 22:55:34.996001157 +0000 UTC Remote: 2024-04-04 22:55:34.907085076 +0000 UTC m=+358.286689706 (delta=88.916081ms)
	I0404 22:55:35.022224   64791 fix.go:200] guest clock delta is within tolerance: 88.916081ms
	I0404 22:55:35.022231   64791 start.go:83] releasing machines lock for "default-k8s-diff-port-952083", held for 19.948498144s
	I0404 22:55:35.022255   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.022485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:35.025707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026147   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.026179   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.026378   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.026876   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027047   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 22:55:35.027120   64791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0404 22:55:35.027159   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.027276   64791 ssh_runner.go:195] Run: cat /version.json
	I0404 22:55:35.027318   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 22:55:35.030408   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030592   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030785   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030819   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.030910   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:35.030929   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.030943   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:35.031146   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031185   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 22:55:35.031317   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031342   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 22:55:35.031485   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 22:55:35.031486   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.031653   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 22:55:35.105523   64791 ssh_runner.go:195] Run: systemctl --version
	I0404 22:55:35.145225   64791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0404 22:55:35.294577   64791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0404 22:55:35.301227   64791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0404 22:55:35.301318   64791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0404 22:55:35.318712   64791 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0404 22:55:35.318739   64791 start.go:494] detecting cgroup driver to use...
	I0404 22:55:35.318797   64791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0404 22:55:35.335684   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0404 22:55:35.351068   64791 docker.go:217] disabling cri-docker service (if available) ...
	I0404 22:55:35.351139   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0404 22:55:35.366084   64791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0404 22:55:35.381259   64791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0404 22:55:35.525652   64791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0404 22:55:35.702060   64791 docker.go:233] disabling docker service ...
	I0404 22:55:35.702137   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0404 22:55:35.720037   64791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0404 22:55:35.735605   64791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0404 22:55:35.868176   64791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0404 22:55:36.011977   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0404 22:55:36.030320   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0404 22:55:36.051953   64791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0404 22:55:36.052033   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.063475   64791 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0404 22:55:36.063539   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.075238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.086866   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.099524   64791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0404 22:55:36.112514   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.126407   64791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0404 22:55:36.146238   64791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
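The run of sed edits above (pause image, cgroup manager, conmon cgroup, default sysctls) converges on a CRI-O drop-in roughly like the sketch below. This is a reconstruction for readability, not output from the run: the [crio.image] and [crio.runtime] section headers are assumed from CRI-O's stock crio.conf layout, since the log only shows the individual key rewrites against /etc/crio/crio.conf.d/02-crio.conf.
	# sketch of /etc/crio/crio.conf.d/02-crio.conf after the edits above
	# (section headers assumed from CRI-O's default config layout)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]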
	I0404 22:55:36.158896   64791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0404 22:55:36.170410   64791 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0404 22:55:36.170482   64791 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0404 22:55:36.185608   64791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0404 22:55:36.196706   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:36.327075   64791 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0404 22:55:36.470054   64791 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0404 22:55:36.470125   64791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0404 22:55:36.475664   64791 start.go:562] Will wait 60s for crictl version
	I0404 22:55:36.475727   64791 ssh_runner.go:195] Run: which crictl
	I0404 22:55:36.480165   64791 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0404 22:55:36.519941   64791 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0404 22:55:36.520021   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.549053   64791 ssh_runner.go:195] Run: crio --version
	I0404 22:55:36.579491   64791 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0404 22:55:36.581026   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetIP
	I0404 22:55:36.583730   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584150   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 22:55:36.584184   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 22:55:36.584371   64791 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0404 22:55:36.588964   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:36.602269   64791 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0404 22:55:36.602374   64791 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 22:55:36.602416   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:36.641719   64791 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0404 22:55:36.641787   64791 ssh_runner.go:195] Run: which lz4
	I0404 22:55:36.646084   64791 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0404 22:55:36.650605   64791 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0404 22:55:36.650644   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0404 22:55:34.279364   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.288411   65047 pod_ready.go:102] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:37.280858   65047 pod_ready.go:92] pod "kube-apiserver-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.280895   65047 pod_ready.go:81] duration metric: took 5.007733221s for pod "kube-apiserver-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.280913   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286888   65047 pod_ready.go:92] pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.286912   65047 pod_ready.go:81] duration metric: took 5.987359ms for pod "kube-controller-manager-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.286922   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292489   65047 pod_ready.go:92] pod "kube-proxy-zmx89" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.292518   65047 pod_ready.go:81] duration metric: took 5.588199ms for pod "kube-proxy-zmx89" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.292530   65047 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298455   65047 pod_ready.go:92] pod "kube-scheduler-no-preload-024416" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:37.298478   65047 pod_ready.go:81] duration metric: took 5.939579ms for pod "kube-scheduler-no-preload-024416" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.298493   65047 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:37.342788   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:39.841832   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:36.956068   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.456343   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:37.955665   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.456277   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.956681   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.455983   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:39.956000   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.456576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:40.956669   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.455855   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:38.281827   64791 crio.go:462] duration metric: took 1.635781318s to copy over tarball
	I0404 22:55:38.281962   64791 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0404 22:55:40.664660   64791 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.382669035s)
	I0404 22:55:40.664694   64791 crio.go:469] duration metric: took 2.382835483s to extract the tarball
	I0404 22:55:40.664704   64791 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0404 22:55:40.706370   64791 ssh_runner.go:195] Run: sudo crictl images --output json
	I0404 22:55:40.753953   64791 crio.go:514] all images are preloaded for cri-o runtime.
	I0404 22:55:40.753983   64791 cache_images.go:84] Images are preloaded, skipping loading
	I0404 22:55:40.753992   64791 kubeadm.go:928] updating node { 192.168.72.148 8444 v1.29.3 crio true true} ...
	I0404 22:55:40.754096   64791 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-952083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0404 22:55:40.754157   64791 ssh_runner.go:195] Run: crio config
	I0404 22:55:40.809287   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:40.809309   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:40.809318   64791 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0404 22:55:40.809338   64791 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.148 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-952083 NodeName:default-k8s-diff-port-952083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0404 22:55:40.809467   64791 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.148
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-952083"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0404 22:55:40.809531   64791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0404 22:55:40.821089   64791 binaries.go:44] Found k8s binaries, skipping transfer
	I0404 22:55:40.821151   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0404 22:55:40.832576   64791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0404 22:55:40.853706   64791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0404 22:55:40.874277   64791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0404 22:55:40.895096   64791 ssh_runner.go:195] Run: grep 192.168.72.148	control-plane.minikube.internal$ /etc/hosts
	I0404 22:55:40.899433   64791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0404 22:55:40.913456   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 22:55:41.041078   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 22:55:41.062340   64791 certs.go:68] Setting up /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083 for IP: 192.168.72.148
	I0404 22:55:41.062382   64791 certs.go:194] generating shared ca certs ...
	I0404 22:55:41.062402   64791 certs.go:226] acquiring lock for ca certs: {Name:mk4fe47cf6261777bba3fe41345d36a5795d4a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 22:55:41.062583   64791 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key
	I0404 22:55:41.062640   64791 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key
	I0404 22:55:41.062662   64791 certs.go:256] generating profile certs ...
	I0404 22:55:41.062776   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/client.key
	I0404 22:55:41.062859   64791 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key.c46373d6
	I0404 22:55:41.062921   64791 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key
	I0404 22:55:41.063037   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem (1338 bytes)
	W0404 22:55:41.063065   64791 certs.go:480] ignoring /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554_empty.pem, impossibly tiny 0 bytes
	I0404 22:55:41.063075   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca-key.pem (1679 bytes)
	I0404 22:55:41.063099   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/ca.pem (1078 bytes)
	I0404 22:55:41.063140   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/cert.pem (1123 bytes)
	I0404 22:55:41.063166   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/certs/key.pem (1675 bytes)
	I0404 22:55:41.063200   64791 certs.go:484] found cert: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem (1708 bytes)
	I0404 22:55:41.063842   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0404 22:55:41.113790   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0404 22:55:41.142967   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0404 22:55:41.174154   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0404 22:55:41.209434   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0404 22:55:41.244064   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0404 22:55:41.272716   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0404 22:55:41.297871   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/default-k8s-diff-port-952083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0404 22:55:41.325547   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/ssl/certs/125542.pem --> /usr/share/ca-certificates/125542.pem (1708 bytes)
	I0404 22:55:41.352050   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0404 22:55:41.377876   64791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-5297/.minikube/certs/12554.pem --> /usr/share/ca-certificates/12554.pem (1338 bytes)
	I0404 22:55:41.404387   64791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0404 22:55:41.423187   64791 ssh_runner.go:195] Run: openssl version
	I0404 22:55:41.429873   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125542.pem && ln -fs /usr/share/ca-certificates/125542.pem /etc/ssl/certs/125542.pem"
	I0404 22:55:41.442164   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447557   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  4 21:40 /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.447623   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125542.pem
	I0404 22:55:41.454131   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125542.pem /etc/ssl/certs/3ec20f2e.0"
	I0404 22:55:41.467223   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0404 22:55:41.480671   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485831   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  4 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.485898   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0404 22:55:41.492136   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0404 22:55:41.505696   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12554.pem && ln -fs /usr/share/ca-certificates/12554.pem /etc/ssl/certs/12554.pem"
	I0404 22:55:41.519176   64791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524158   64791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  4 21:40 /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.524221   64791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12554.pem
	I0404 22:55:41.531176   64791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12554.pem /etc/ssl/certs/51391683.0"
	I0404 22:55:41.543412   64791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0404 22:55:41.548643   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0404 22:55:41.555139   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0404 22:55:41.562284   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0404 22:55:41.569434   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0404 22:55:41.576462   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0404 22:55:41.583084   64791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0404 22:55:41.589945   64791 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-952083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 22:55:41.590031   64791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0404 22:55:41.590093   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:41.632266   64791 cri.go:89] found id: ""
	I0404 22:55:41.632350   64791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0404 22:55:41.644142   64791 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0404 22:55:41.644164   64791 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0404 22:55:41.644170   64791 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0404 22:55:41.644215   64791 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0404 22:55:41.656179   64791 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:55:41.657324   64791 kubeconfig.go:125] found "default-k8s-diff-port-952083" server: "https://192.168.72.148:8444"
	I0404 22:55:41.659605   64791 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0404 22:55:39.306769   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.307336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:42.038454   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:44.341305   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.341781   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:41.956291   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.455751   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:42.955701   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.456455   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:43.956511   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.456335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.956604   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.456239   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:45.955763   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:46.456691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:41.672105   64791 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.148
	I0404 22:55:42.028665   64791 kubeadm.go:1154] stopping kube-system containers ...
	I0404 22:55:42.028686   64791 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0404 22:55:42.028762   64791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0404 22:55:42.091777   64791 cri.go:89] found id: ""
	I0404 22:55:42.091854   64791 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0404 22:55:42.113539   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:55:42.124875   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:55:42.124901   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 22:55:42.124954   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 22:55:42.135500   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:55:42.135570   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:55:42.146253   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 22:55:42.156700   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:55:42.156761   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:55:42.168384   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.179440   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:55:42.179516   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:55:42.191004   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 22:55:42.201446   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:55:42.201506   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 22:55:42.212001   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:55:42.223300   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:42.338171   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.549365   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.211152725s)
	I0404 22:55:43.549401   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.801115   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.882593   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:43.959297   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:55:43.959380   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.459491   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.960236   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:44.988764   64791 api_server.go:72] duration metric: took 1.029467706s to wait for apiserver process to appear ...
	I0404 22:55:44.988792   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:55:44.988813   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:43.615360   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:45.804976   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:47.806675   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:48.357574   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.357611   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.357628   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.395772   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0404 22:55:48.395808   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0404 22:55:48.488922   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.505422   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.505481   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:48.988969   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:48.994000   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:48.994032   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.489335   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.500302   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0404 22:55:49.500347   64791 api_server.go:103] status: https://192.168.72.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0404 22:55:49.989893   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 22:55:49.994450   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 22:55:50.001679   64791 api_server.go:141] control plane version: v1.29.3
	I0404 22:55:50.001715   64791 api_server.go:131] duration metric: took 5.012915028s to wait for apiserver health ...
	I0404 22:55:50.001726   64791 cni.go:84] Creating CNI manager for ""
	I0404 22:55:50.001737   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 22:55:50.004063   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
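The healthz exchange above (403 for the anonymous probe, then 500 while poststarthooks are still failing, then 200) is the usual progression while a restarted apiserver finishes bootstrapping. A minimal sketch of that kind of probe loop follows, assuming a skip-verify TLS client and a fixed 500ms poll purely for illustration, not whatever backoff api_server.go actually uses:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the timeout expires, printing the body of non-200 responses (the
// "[+]/[-] poststarthook" breakdown seen in the log).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed cert during bring-up, so this
		// sketch skips verification; a real client would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.148:8444/healthz", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}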
	I0404 22:55:48.840760   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:50.842564   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:46.956402   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.456719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:47.956495   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.456256   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:48.955795   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.455864   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:49.956545   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.456336   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.956454   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:51.455845   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:50.005634   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 22:55:50.017205   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 22:55:50.041244   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:55:50.050985   64791 system_pods.go:59] 8 kube-system pods found
	I0404 22:55:50.051033   64791 system_pods.go:61] "coredns-76f75df574-psx97" [9e220912-4d45-45d7-85cb-65396d969b27] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0404 22:55:50.051044   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [980bda86-5d40-4762-ae29-9a1d01bcd17f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0404 22:55:50.051055   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [0d3ec134-dc70-445a-a2f8-1c00f6711b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0404 22:55:50.051064   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [d82ba649-a2ae-479e-8aa1-e734bc69f702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0404 22:55:50.051074   64791 system_pods.go:61] "kube-proxy-ssg9w" [23098716-a4cd-4164-a604-fc27b807050f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0404 22:55:50.051081   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [58c1457c-816c-4f10-ac46-be14b4e20592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0404 22:55:50.051089   64791 system_pods.go:61] "metrics-server-57f55c9bc5-zbl54" [f754b7dd-4233-4faa-b8d0-0d42d4c432a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:55:50.051105   64791 system_pods.go:61] "storage-provisioner" [3ea130dd-2282-4094-b3ca-990faf89b79b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0404 22:55:50.051116   64791 system_pods.go:74] duration metric: took 9.852724ms to wait for pod list to return data ...
	I0404 22:55:50.051136   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:55:50.056106   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:55:50.056149   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 22:55:50.056161   64791 node_conditions.go:105] duration metric: took 5.019174ms to run NodePressure ...
	I0404 22:55:50.056180   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0404 22:55:50.366600   64791 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371116   64791 kubeadm.go:733] kubelet initialised
	I0404 22:55:50.371136   64791 kubeadm.go:734] duration metric: took 4.514303ms waiting for restarted kubelet to initialise ...
	I0404 22:55:50.371152   64791 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:55:50.376537   64791 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.382379   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382406   64791 pod_ready.go:81] duration metric: took 5.841107ms for pod "coredns-76f75df574-psx97" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.382415   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "coredns-76f75df574-psx97" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.382421   64791 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.387835   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387862   64791 pod_ready.go:81] duration metric: took 5.433039ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.387874   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.387883   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.395448   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395477   64791 pod_ready.go:81] duration metric: took 7.587276ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.395488   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.395494   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.444766   64791 pod_ready.go:97] node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444796   64791 pod_ready.go:81] duration metric: took 49.295101ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	E0404 22:55:50.444807   64791 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-952083" hosting pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-952083" has status "Ready":"False"
	I0404 22:55:50.444813   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845479   64791 pod_ready.go:92] pod "kube-proxy-ssg9w" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:50.845502   64791 pod_ready.go:81] duration metric: took 400.682428ms for pod "kube-proxy-ssg9w" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:50.845511   64791 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
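The pod_ready.go lines track each system pod until its Ready condition reports True (or skip the wait when the hosting node itself is not yet Ready). A minimal client-go sketch of that condition check is below; the kubeconfig path, the 2s poll interval, and the example pod name are assumptions for illustration only:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls a pod until its Ready condition is True, mirroring the
// "waiting up to ... for pod ... to be Ready" lines in the log.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Kubeconfig path is a placeholder for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(cs, "kube-system", "kube-proxy-ssg9w", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}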
	I0404 22:55:50.305975   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.307023   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:52.842745   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:55.341222   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:51.955847   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.456474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.956717   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.456485   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:53.956668   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.455709   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:54.956540   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.455959   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:55.955819   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:56.456025   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:52.852107   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.856385   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:54.806097   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.806696   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.841694   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:00.341504   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:56.955746   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.456752   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.956458   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.455791   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:58.956520   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.456037   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:59.956424   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.456347   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:00.955807   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.456367   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:55:57.365969   64791 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:57.853738   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 22:55:57.853764   64791 pod_ready.go:81] duration metric: took 7.008247096s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:57.853774   64791 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	I0404 22:55:59.862975   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:55:59.305144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.805592   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:02.341599   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.842260   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:01.956676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.455725   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:02.956240   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.456026   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:03.955796   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.455684   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:04.955830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.455658   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:05.956488   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:06.456780   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:01.863099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:04.361040   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.362845   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:03.805883   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.305272   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:08.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:07.341339   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:09.341800   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.342526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:06.956270   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.456242   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:07.956623   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.455980   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.955699   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.456558   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:09.955713   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.455854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:10.955758   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:11.456543   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:08.363849   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.861759   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:10.805375   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:12.808721   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:13.841173   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.842526   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:11.956223   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.456571   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:12.956246   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.456188   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.956319   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.456519   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:14.955719   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.455682   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:15.956653   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:16.456447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:13.361956   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.362846   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:15.306163   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.806194   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:17.845799   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.342615   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:16.955750   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.456215   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.956505   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.456183   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:18.955744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.456227   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:19.955977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.455808   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:20.956276   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.456049   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:17.861226   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:19.861343   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:20.307101   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.805150   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:22.839779   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.841442   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:21.956160   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.456145   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:22.956335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.456579   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:23.956691   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.456562   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:24.956005   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.456432   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:25.956467   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:26.456167   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:21.862937   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.360818   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:24.809913   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.305921   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:27.342163   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.347258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:26.956592   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:26.956691   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:27.005811   65393 cri.go:89] found id: ""
	I0404 22:56:27.005835   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.005842   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:27.005847   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:27.005917   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:27.042188   65393 cri.go:89] found id: ""
	I0404 22:56:27.042211   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.042235   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:27.042241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:27.042286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:27.079865   65393 cri.go:89] found id: ""
	I0404 22:56:27.079892   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.079900   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:27.079906   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:27.079958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:27.116998   65393 cri.go:89] found id: ""
	I0404 22:56:27.117024   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.117031   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:27.117037   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:27.117086   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:27.156188   65393 cri.go:89] found id: ""
	I0404 22:56:27.156214   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.156221   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:27.156236   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:27.156314   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:27.191588   65393 cri.go:89] found id: ""
	I0404 22:56:27.191620   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.191633   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:27.191640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:27.191687   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:27.231075   65393 cri.go:89] found id: ""
	I0404 22:56:27.231107   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.231117   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:27.231124   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:27.231190   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:27.270480   65393 cri.go:89] found id: ""
	I0404 22:56:27.270513   65393 logs.go:276] 0 containers: []
	W0404 22:56:27.270525   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:27.270537   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:27.270561   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:27.324332   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:27.324373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:27.339847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:27.339874   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:27.468846   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:27.468868   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:27.468885   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:27.533979   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:27.534016   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.080447   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:30.094371   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:30.094442   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:30.132187   65393 cri.go:89] found id: ""
	I0404 22:56:30.132211   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.132219   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:30.132225   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:30.132271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:30.173402   65393 cri.go:89] found id: ""
	I0404 22:56:30.173427   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.173437   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:30.173445   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:30.173509   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:30.212702   65393 cri.go:89] found id: ""
	I0404 22:56:30.212759   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.212773   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:30.212784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:30.212857   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:30.267124   65393 cri.go:89] found id: ""
	I0404 22:56:30.267153   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.267164   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:30.267171   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:30.267240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:30.314850   65393 cri.go:89] found id: ""
	I0404 22:56:30.314877   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.314887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:30.314895   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:30.314951   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:30.353961   65393 cri.go:89] found id: ""
	I0404 22:56:30.353985   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.353996   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:30.354003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:30.354065   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:30.393287   65393 cri.go:89] found id: ""
	I0404 22:56:30.393321   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.393333   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:30.393340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:30.393402   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:30.432268   65393 cri.go:89] found id: ""
	I0404 22:56:30.432304   65393 logs.go:276] 0 containers: []
	W0404 22:56:30.432315   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:30.432333   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:30.432349   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:30.498906   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:30.498941   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:30.544676   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:30.544711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:30.595528   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:30.595562   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:30.610773   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:30.610811   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:30.688433   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:26.862939   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.360914   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.360952   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:29.806657   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:32.304653   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:31.841404   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.341994   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:33.188634   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:33.203199   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:33.203262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:33.243222   65393 cri.go:89] found id: ""
	I0404 22:56:33.243250   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.243257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:33.243262   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:33.243330   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:33.284525   65393 cri.go:89] found id: ""
	I0404 22:56:33.284550   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.284560   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:33.284567   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:33.284621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:33.324219   65393 cri.go:89] found id: ""
	I0404 22:56:33.324249   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.324266   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:33.324273   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:33.324328   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:33.362229   65393 cri.go:89] found id: ""
	I0404 22:56:33.362254   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.362265   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:33.362272   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:33.362333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.404616   65393 cri.go:89] found id: ""
	I0404 22:56:33.404651   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.404663   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:33.404671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:33.404741   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:33.445123   65393 cri.go:89] found id: ""
	I0404 22:56:33.445150   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.445160   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:33.445168   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:33.445227   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:33.485996   65393 cri.go:89] found id: ""
	I0404 22:56:33.486025   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.486033   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:33.486041   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:33.486108   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:33.524275   65393 cri.go:89] found id: ""
	I0404 22:56:33.524299   65393 logs.go:276] 0 containers: []
	W0404 22:56:33.524307   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:33.524315   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:33.524326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:33.577095   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:33.577133   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:33.592799   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:33.592830   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:33.672784   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:33.672804   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:33.672815   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:33.748049   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:33.748091   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:36.293335   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:36.308303   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:36.308395   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:36.346627   65393 cri.go:89] found id: ""
	I0404 22:56:36.346648   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.346656   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:36.346661   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:36.346704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:36.386923   65393 cri.go:89] found id: ""
	I0404 22:56:36.386950   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.386958   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:36.386963   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:36.387011   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:36.427795   65393 cri.go:89] found id: ""
	I0404 22:56:36.427824   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.427832   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:36.427838   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:36.427898   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:36.464754   65393 cri.go:89] found id: ""
	I0404 22:56:36.464780   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.464790   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:36.464797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:36.464864   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:33.361764   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:35.861327   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:34.807482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:37.305470   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.842142   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:38.843672   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.341676   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:36.499502   65393 cri.go:89] found id: ""
	I0404 22:56:36.499529   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.499540   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:36.499549   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:36.499610   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:36.536686   65393 cri.go:89] found id: ""
	I0404 22:56:36.536716   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.536726   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:36.536734   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:36.536782   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:36.572573   65393 cri.go:89] found id: ""
	I0404 22:56:36.572602   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.572614   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:36.572623   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:36.572683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:36.607444   65393 cri.go:89] found id: ""
	I0404 22:56:36.607474   65393 logs.go:276] 0 containers: []
	W0404 22:56:36.607483   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:36.607491   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:36.607502   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:36.662990   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:36.663040   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:36.677996   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:36.678019   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:36.751850   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:36.751871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:36.751886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:36.831451   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:36.831484   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:39.393170   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:39.407581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:39.407640   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:39.441850   65393 cri.go:89] found id: ""
	I0404 22:56:39.441879   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.441889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:39.441896   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:39.441981   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:39.476890   65393 cri.go:89] found id: ""
	I0404 22:56:39.476919   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.476931   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:39.476938   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:39.477001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:39.511497   65393 cri.go:89] found id: ""
	I0404 22:56:39.511523   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.511534   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:39.511540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:39.511605   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:39.549485   65393 cri.go:89] found id: ""
	I0404 22:56:39.549514   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.549526   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:39.549534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:39.549594   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:39.589203   65393 cri.go:89] found id: ""
	I0404 22:56:39.589234   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.589243   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:39.589249   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:39.589311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:39.626903   65393 cri.go:89] found id: ""
	I0404 22:56:39.626926   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.626939   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:39.626946   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:39.627008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:39.660975   65393 cri.go:89] found id: ""
	I0404 22:56:39.660999   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.661007   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:39.661016   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:39.661067   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:39.697162   65393 cri.go:89] found id: ""
	I0404 22:56:39.697187   65393 logs.go:276] 0 containers: []
	W0404 22:56:39.697195   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:39.697203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:39.697217   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:39.755642   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:39.755683   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:39.773016   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:39.773052   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:39.849466   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:39.849491   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:39.849507   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:39.940680   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:39.940716   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:38.360638   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:40.361088   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:39.305528   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:41.307136   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.844830   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:46.342481   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:42.486573   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:42.500548   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:42.500612   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:42.539890   65393 cri.go:89] found id: ""
	I0404 22:56:42.539926   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.539954   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:42.539964   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:42.540030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:42.576664   65393 cri.go:89] found id: ""
	I0404 22:56:42.576699   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.576709   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:42.576715   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:42.576816   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:42.615456   65393 cri.go:89] found id: ""
	I0404 22:56:42.615489   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.615500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:42.615507   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:42.615569   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:42.667886   65393 cri.go:89] found id: ""
	I0404 22:56:42.667917   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.667928   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:42.667935   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:42.668001   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:42.726120   65393 cri.go:89] found id: ""
	I0404 22:56:42.726150   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.726162   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:42.726169   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:42.726233   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:42.781280   65393 cri.go:89] found id: ""
	I0404 22:56:42.781305   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.781316   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:42.781322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:42.781386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:42.818419   65393 cri.go:89] found id: ""
	I0404 22:56:42.818449   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.818459   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:42.818466   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:42.818531   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:42.867866   65393 cri.go:89] found id: ""
	I0404 22:56:42.867902   65393 logs.go:276] 0 containers: []
	W0404 22:56:42.867911   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:42.867920   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:42.867935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:42.953141   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:42.953186   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:42.994936   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:42.994968   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:43.047257   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:43.047288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:43.062391   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:43.062426   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:43.138948   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:45.639469   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:45.654584   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:45.654662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:45.693244   65393 cri.go:89] found id: ""
	I0404 22:56:45.693267   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.693276   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:45.693281   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:45.693335   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:45.732468   65393 cri.go:89] found id: ""
	I0404 22:56:45.732501   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.732513   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:45.732520   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:45.732587   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:45.769504   65393 cri.go:89] found id: ""
	I0404 22:56:45.769538   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.769552   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:45.769560   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:45.769625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:45.815840   65393 cri.go:89] found id: ""
	I0404 22:56:45.815864   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.815872   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:45.815877   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:45.815987   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:45.856479   65393 cri.go:89] found id: ""
	I0404 22:56:45.856511   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.856522   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:45.856530   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:45.856596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:45.896703   65393 cri.go:89] found id: ""
	I0404 22:56:45.896726   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.896734   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:45.896739   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:45.896797   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:45.938227   65393 cri.go:89] found id: ""
	I0404 22:56:45.938253   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.938261   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:45.938266   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:45.938349   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:45.975762   65393 cri.go:89] found id: ""
	I0404 22:56:45.975790   65393 logs.go:276] 0 containers: []
	W0404 22:56:45.975801   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:45.975811   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:45.975823   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:46.056563   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:46.056599   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.099210   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:46.099242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:46.153524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:46.153560   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:46.169268   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:46.169297   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:46.244893   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:42.361586   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:44.860609   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:43.806644   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:45.807773   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:47.807842   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.841498   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.341274   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.745334   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:48.760547   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:48.760625   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:48.799676   65393 cri.go:89] found id: ""
	I0404 22:56:48.799699   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.799706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:48.799721   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:48.799780   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:48.843434   65393 cri.go:89] found id: ""
	I0404 22:56:48.843467   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.843476   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:48.843481   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:48.843544   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:48.895391   65393 cri.go:89] found id: ""
	I0404 22:56:48.895421   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.895440   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:48.895448   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:48.895513   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:48.936222   65393 cri.go:89] found id: ""
	I0404 22:56:48.936252   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.936263   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:48.936271   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:48.936334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:48.974533   65393 cri.go:89] found id: ""
	I0404 22:56:48.974563   65393 logs.go:276] 0 containers: []
	W0404 22:56:48.974570   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:48.974575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:48.974629   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:49.015378   65393 cri.go:89] found id: ""
	I0404 22:56:49.015406   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.015424   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:49.015440   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:49.015501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:49.056623   65393 cri.go:89] found id: ""
	I0404 22:56:49.056647   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.056658   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:49.056664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:49.056725   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:49.102414   65393 cri.go:89] found id: ""
	I0404 22:56:49.102442   65393 logs.go:276] 0 containers: []
	W0404 22:56:49.102453   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:49.102464   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:49.102476   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:49.158193   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:49.158240   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:49.173121   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:49.173147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:49.248973   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:49.249000   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:49.249017   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:49.341732   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:49.341778   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:46.861623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:48.863321   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.362926   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:50.305198   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:52.305614   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:53.348777   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:55.848753   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:51.888403   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:51.903442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:51.903503   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:51.941672   65393 cri.go:89] found id: ""
	I0404 22:56:51.941698   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.941706   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:51.941712   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:51.941766   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:51.981373   65393 cri.go:89] found id: ""
	I0404 22:56:51.981396   65393 logs.go:276] 0 containers: []
	W0404 22:56:51.981408   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:51.981413   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:51.981460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:52.019539   65393 cri.go:89] found id: ""
	I0404 22:56:52.019567   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.019575   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:52.019581   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:52.019645   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:52.059015   65393 cri.go:89] found id: ""
	I0404 22:56:52.059050   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.059060   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:52.059073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:52.059134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:52.099879   65393 cri.go:89] found id: ""
	I0404 22:56:52.099904   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.099911   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:52.099917   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:52.099975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:52.140665   65393 cri.go:89] found id: ""
	I0404 22:56:52.140739   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.140752   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:52.140761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:52.140833   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:52.182063   65393 cri.go:89] found id: ""
	I0404 22:56:52.182091   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.182100   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:52.182106   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:52.182161   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:52.218610   65393 cri.go:89] found id: ""
	I0404 22:56:52.218636   65393 logs.go:276] 0 containers: []
	W0404 22:56:52.218644   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:52.218651   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:52.218666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:52.232987   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:52.233014   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:52.307317   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:52.307343   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:52.307358   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:52.390484   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:52.390522   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:52.437568   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:52.437601   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:54.995830   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:55.010758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:55.010818   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:55.057312   65393 cri.go:89] found id: ""
	I0404 22:56:55.057342   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.057354   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:55.057362   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:55.057440   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:55.097774   65393 cri.go:89] found id: ""
	I0404 22:56:55.097803   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.097815   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:55.097822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:55.097881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:55.134169   65393 cri.go:89] found id: ""
	I0404 22:56:55.134198   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.134207   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:55.134215   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:55.134268   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:55.174151   65393 cri.go:89] found id: ""
	I0404 22:56:55.174191   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.174203   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:55.174210   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:55.174297   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:55.213618   65393 cri.go:89] found id: ""
	I0404 22:56:55.213646   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.213655   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:55.213661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:55.213722   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:55.252406   65393 cri.go:89] found id: ""
	I0404 22:56:55.252435   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.252446   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:55.252455   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:55.252536   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:55.289595   65393 cri.go:89] found id: ""
	I0404 22:56:55.289623   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.289633   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:55.289640   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:55.289702   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:55.328579   65393 cri.go:89] found id: ""
	I0404 22:56:55.328611   65393 logs.go:276] 0 containers: []
	W0404 22:56:55.328621   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:55.328632   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:55.328647   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:55.384434   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:55.384470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:55.399311   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:55.399336   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:55.478341   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:55.478370   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:55.478404   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:55.560209   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:55.560242   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:56:53.861243   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.361466   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:54.306859   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:56.804903   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.341302   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.342865   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.104854   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:56:58.119735   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:56:58.119813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:56:58.158145   65393 cri.go:89] found id: ""
	I0404 22:56:58.158169   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.158182   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:56:58.158190   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:56:58.158255   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:56:58.198676   65393 cri.go:89] found id: ""
	I0404 22:56:58.198703   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.198714   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:56:58.198721   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:56:58.198779   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:56:58.236393   65393 cri.go:89] found id: ""
	I0404 22:56:58.236419   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.236430   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:56:58.236436   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:56:58.236505   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:56:58.274688   65393 cri.go:89] found id: ""
	I0404 22:56:58.274714   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.274724   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:56:58.274731   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:56:58.274796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:56:58.311908   65393 cri.go:89] found id: ""
	I0404 22:56:58.311935   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.311947   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:56:58.311956   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:56:58.312016   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:56:58.353477   65393 cri.go:89] found id: ""
	I0404 22:56:58.353500   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.353513   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:56:58.353518   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:56:58.353577   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.390981   65393 cri.go:89] found id: ""
	I0404 22:56:58.391004   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.391012   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:56:58.391017   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:56:58.391084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:56:58.433125   65393 cri.go:89] found id: ""
	I0404 22:56:58.433147   65393 logs.go:276] 0 containers: []
	W0404 22:56:58.433160   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:56:58.433168   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:56:58.433181   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:56:58.488214   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:56:58.488251   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:56:58.503274   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:56:58.503315   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:56:58.583124   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:56:58.583149   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:56:58.583167   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:56:58.662856   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:56:58.662889   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:01.204061   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:01.219133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:01.219213   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:01.256615   65393 cri.go:89] found id: ""
	I0404 22:57:01.256641   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.256650   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:01.256655   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:01.256704   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:01.294765   65393 cri.go:89] found id: ""
	I0404 22:57:01.294796   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.294805   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:01.294813   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:01.294874   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:01.334936   65393 cri.go:89] found id: ""
	I0404 22:57:01.334966   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.334976   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:01.334983   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:01.335030   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:01.382389   65393 cri.go:89] found id: ""
	I0404 22:57:01.382429   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.382438   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:01.382443   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:01.382491   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:01.421962   65393 cri.go:89] found id: ""
	I0404 22:57:01.422001   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.422013   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:01.422020   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:01.422084   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:01.460394   65393 cri.go:89] found id: ""
	I0404 22:57:01.460424   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.460432   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:01.460437   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:01.460484   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:56:58.365468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:00.862158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:56:58.807541   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.305535   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:03.305610   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:02.841121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:04.841717   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:01.497525   65393 cri.go:89] found id: ""
	I0404 22:57:01.497552   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.497561   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:01.497566   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:01.497623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:01.535396   65393 cri.go:89] found id: ""
	I0404 22:57:01.535431   65393 logs.go:276] 0 containers: []
	W0404 22:57:01.535442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:01.535454   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:01.535469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:01.588397   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:01.588437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:01.603396   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:01.603427   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:01.686312   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:01.686332   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:01.686344   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:01.771352   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:01.771393   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.316206   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:04.330612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:04.330677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:04.377998   65393 cri.go:89] found id: ""
	I0404 22:57:04.378023   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.378031   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:04.378036   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:04.378098   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:04.421775   65393 cri.go:89] found id: ""
	I0404 22:57:04.421811   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.421822   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:04.421830   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:04.421894   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:04.463234   65393 cri.go:89] found id: ""
	I0404 22:57:04.463265   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.463276   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:04.463284   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:04.463347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:04.510879   65393 cri.go:89] found id: ""
	I0404 22:57:04.510907   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.510916   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:04.510925   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:04.511012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:04.552305   65393 cri.go:89] found id: ""
	I0404 22:57:04.552336   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.552348   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:04.552356   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:04.552413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:04.592631   65393 cri.go:89] found id: ""
	I0404 22:57:04.592661   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.592672   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:04.592679   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:04.592739   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:04.631854   65393 cri.go:89] found id: ""
	I0404 22:57:04.631883   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.631893   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:04.631900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:04.631966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:04.670530   65393 cri.go:89] found id: ""
	I0404 22:57:04.670554   65393 logs.go:276] 0 containers: []
	W0404 22:57:04.670563   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:04.670570   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:04.670582   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:04.685546   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:04.685575   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:04.770377   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:04.770400   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:04.770420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:04.853637   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:04.853674   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:04.899690   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:04.899719   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:03.362881   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.363597   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:05.306377   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:07.805659   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:06.842313   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.341347   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:07.451203   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:07.465593   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:07.465665   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:07.505475   65393 cri.go:89] found id: ""
	I0404 22:57:07.505504   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.505516   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:07.505524   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:07.505584   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:07.543744   65393 cri.go:89] found id: ""
	I0404 22:57:07.543802   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.543814   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:07.543822   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:07.543891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:07.586034   65393 cri.go:89] found id: ""
	I0404 22:57:07.586059   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.586067   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:07.586073   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:07.586133   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:07.624880   65393 cri.go:89] found id: ""
	I0404 22:57:07.624908   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.624917   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:07.624932   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:07.624992   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:07.667653   65393 cri.go:89] found id: ""
	I0404 22:57:07.667684   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.667696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:07.667704   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:07.667798   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:07.704434   65393 cri.go:89] found id: ""
	I0404 22:57:07.704466   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.704478   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:07.704486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:07.704547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:07.741679   65393 cri.go:89] found id: ""
	I0404 22:57:07.741702   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.741710   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:07.741715   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:07.741760   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:07.782407   65393 cri.go:89] found id: ""
	I0404 22:57:07.782434   65393 logs.go:276] 0 containers: []
	W0404 22:57:07.782442   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:07.782450   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:07.782461   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:07.796141   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:07.796175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:07.880985   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:07.881004   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:07.881015   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:07.963986   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:07.964027   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:08.010874   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:08.010907   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.564802   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:10.581360   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:10.581430   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:10.619856   65393 cri.go:89] found id: ""
	I0404 22:57:10.619882   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.619889   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:10.619895   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:10.619958   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:10.662867   65393 cri.go:89] found id: ""
	I0404 22:57:10.662893   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.662900   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:10.662906   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:10.662966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:10.705241   65393 cri.go:89] found id: ""
	I0404 22:57:10.705277   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.705290   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:10.705298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:10.705363   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:10.743811   65393 cri.go:89] found id: ""
	I0404 22:57:10.743844   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.743855   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:10.743863   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:10.743918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:10.783648   65393 cri.go:89] found id: ""
	I0404 22:57:10.783672   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.783684   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:10.783690   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:10.783743   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:10.825606   65393 cri.go:89] found id: ""
	I0404 22:57:10.825639   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.825649   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:10.825657   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:10.825712   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:10.865138   65393 cri.go:89] found id: ""
	I0404 22:57:10.865168   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.865178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:10.865185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:10.865238   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:10.906855   65393 cri.go:89] found id: ""
	I0404 22:57:10.906881   65393 logs.go:276] 0 containers: []
	W0404 22:57:10.906888   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:10.906895   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:10.906937   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:10.960341   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:10.960375   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:10.975199   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:10.975228   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:11.083944   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:11.083970   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:11.083985   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:11.166754   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:11.166794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:07.860607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.864277   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:09.806222   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:12.305904   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:11.841055   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.841573   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:15.843393   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:13.708326   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:13.724345   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:13.724413   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:13.766699   65393 cri.go:89] found id: ""
	I0404 22:57:13.766732   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.766744   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:13.766751   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:13.766813   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:13.804517   65393 cri.go:89] found id: ""
	I0404 22:57:13.804545   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.804556   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:13.804564   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:13.804623   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:13.846168   65393 cri.go:89] found id: ""
	I0404 22:57:13.846202   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.846213   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:13.846219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:13.846278   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:13.894719   65393 cri.go:89] found id: ""
	I0404 22:57:13.894743   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.894753   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:13.894761   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:13.894823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:13.929981   65393 cri.go:89] found id: ""
	I0404 22:57:13.930013   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.930024   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:13.930031   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:13.930102   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:13.968531   65393 cri.go:89] found id: ""
	I0404 22:57:13.968578   65393 logs.go:276] 0 containers: []
	W0404 22:57:13.968590   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:13.968598   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:13.968662   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:14.012460   65393 cri.go:89] found id: ""
	I0404 22:57:14.012492   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.012502   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:14.012509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:14.012571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:14.051175   65393 cri.go:89] found id: ""
	I0404 22:57:14.051207   65393 logs.go:276] 0 containers: []
	W0404 22:57:14.051218   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:14.051228   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:14.051243   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:14.107968   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:14.108004   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:14.123756   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:14.123794   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:14.209452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:14.209475   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:14.209493   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:14.287297   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:14.287334   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:11.864620   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.360720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.361734   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:14.307191   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.806031   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.341533   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.344059   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:16.832667   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:16.850323   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:16.850396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:16.897389   65393 cri.go:89] found id: ""
	I0404 22:57:16.897415   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.897426   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:16.897433   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:16.897502   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:16.938893   65393 cri.go:89] found id: ""
	I0404 22:57:16.938927   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.938939   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:16.938945   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:16.939005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:16.978621   65393 cri.go:89] found id: ""
	I0404 22:57:16.978648   65393 logs.go:276] 0 containers: []
	W0404 22:57:16.978655   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:16.978661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:16.978707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:17.036499   65393 cri.go:89] found id: ""
	I0404 22:57:17.036523   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.036532   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:17.036539   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:17.036596   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:17.092932   65393 cri.go:89] found id: ""
	I0404 22:57:17.092958   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.092966   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:17.092972   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:17.093026   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:17.129925   65393 cri.go:89] found id: ""
	I0404 22:57:17.129948   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.129956   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:17.129963   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:17.130025   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:17.168192   65393 cri.go:89] found id: ""
	I0404 22:57:17.168218   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.168232   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:17.168238   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:17.168317   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:17.213090   65393 cri.go:89] found id: ""
	I0404 22:57:17.213115   65393 logs.go:276] 0 containers: []
	W0404 22:57:17.213125   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:17.213136   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:17.213149   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:17.295489   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:17.295527   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:17.350780   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:17.350832   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:17.403580   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:17.403621   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:17.419093   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:17.419131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:17.503614   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.004676   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:20.019340   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:20.019421   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:20.059400   65393 cri.go:89] found id: ""
	I0404 22:57:20.059431   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.059442   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:20.059448   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:20.059510   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:20.104769   65393 cri.go:89] found id: ""
	I0404 22:57:20.104796   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.104808   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:20.104815   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:20.104875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:20.143209   65393 cri.go:89] found id: ""
	I0404 22:57:20.143233   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.143241   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:20.143248   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:20.143300   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:20.187941   65393 cri.go:89] found id: ""
	I0404 22:57:20.187976   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.187987   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:20.187995   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:20.188082   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:20.231158   65393 cri.go:89] found id: ""
	I0404 22:57:20.231192   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.231200   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:20.231206   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:20.231271   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:20.281069   65393 cri.go:89] found id: ""
	I0404 22:57:20.281102   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.281113   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:20.281121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:20.281187   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:20.320403   65393 cri.go:89] found id: ""
	I0404 22:57:20.320436   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.320448   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:20.320456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:20.320529   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:20.371202   65393 cri.go:89] found id: ""
	I0404 22:57:20.371229   65393 logs.go:276] 0 containers: []
	W0404 22:57:20.371237   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:20.371246   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:20.371256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:20.416698   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:20.416725   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:20.468328   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:20.468362   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:20.483250   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:20.483274   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:20.562841   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:20.562871   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:20.562886   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:18.363136   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:20.860719   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:18.806337   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:21.306098   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:22.841457   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:24.843001   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.141477   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:23.157859   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:23.157924   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:23.203492   65393 cri.go:89] found id: ""
	I0404 22:57:23.203526   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.203538   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:23.203545   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:23.203613   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:23.248190   65393 cri.go:89] found id: ""
	I0404 22:57:23.248220   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.248231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:23.248244   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:23.248310   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:23.284427   65393 cri.go:89] found id: ""
	I0404 22:57:23.284456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.284467   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:23.284475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:23.284562   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:23.322429   65393 cri.go:89] found id: ""
	I0404 22:57:23.322456   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.322464   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:23.322469   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:23.322534   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:23.364030   65393 cri.go:89] found id: ""
	I0404 22:57:23.364069   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.364080   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:23.364087   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:23.364168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:23.408306   65393 cri.go:89] found id: ""
	I0404 22:57:23.408343   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.408356   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:23.408363   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:23.408423   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:23.452933   65393 cri.go:89] found id: ""
	I0404 22:57:23.452968   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.452976   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:23.452982   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:23.453036   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:23.494156   65393 cri.go:89] found id: ""
	I0404 22:57:23.494184   65393 logs.go:276] 0 containers: []
	W0404 22:57:23.494193   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:23.494203   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:23.494222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:23.548013   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:23.548053   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:23.564765   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:23.564797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:23.642661   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:23.642685   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:23.642700   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:23.737958   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:23.737996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:26.290576   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:26.307580   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:26.307641   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:26.345971   65393 cri.go:89] found id: ""
	I0404 22:57:26.346000   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.346011   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:26.346018   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:26.346077   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:26.386979   65393 cri.go:89] found id: ""
	I0404 22:57:26.387009   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.387019   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:26.387026   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:26.387112   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:26.430627   65393 cri.go:89] found id: ""
	I0404 22:57:26.430649   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.430665   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:26.430671   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:26.430724   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:26.469657   65393 cri.go:89] found id: ""
	I0404 22:57:26.469705   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.469716   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:26.469723   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:26.469794   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:22.861245   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.360840   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:23.804732   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:25.805727   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.805819   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:27.343674   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.843199   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:26.513853   65393 cri.go:89] found id: ""
	I0404 22:57:26.513877   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.513887   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:26.513894   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:26.513954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:26.557702   65393 cri.go:89] found id: ""
	I0404 22:57:26.557731   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.557742   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:26.557749   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:26.557807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:26.595248   65393 cri.go:89] found id: ""
	I0404 22:57:26.595279   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.595291   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:26.595298   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:26.595364   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:26.634552   65393 cri.go:89] found id: ""
	I0404 22:57:26.634581   65393 logs.go:276] 0 containers: []
	W0404 22:57:26.634591   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:26.634603   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:26.634618   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:26.686928   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:26.686963   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:26.701308   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:26.701341   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:26.785243   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:26.785267   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:26.785286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:26.867513   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:26.867555   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.416286   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:29.434234   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:29.434308   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:29.488598   65393 cri.go:89] found id: ""
	I0404 22:57:29.488628   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.488637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:29.488643   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:29.488727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:29.541107   65393 cri.go:89] found id: ""
	I0404 22:57:29.541130   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.541138   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:29.541144   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:29.541204   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:29.597144   65393 cri.go:89] found id: ""
	I0404 22:57:29.597174   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.597186   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:29.597192   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:29.597258   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:29.636833   65393 cri.go:89] found id: ""
	I0404 22:57:29.636865   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.636874   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:29.636881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:29.636961   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:29.672867   65393 cri.go:89] found id: ""
	I0404 22:57:29.672893   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.672903   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:29.672909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:29.672970   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:29.713135   65393 cri.go:89] found id: ""
	I0404 22:57:29.713168   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.713180   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:29.713188   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:29.713279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:29.750862   65393 cri.go:89] found id: ""
	I0404 22:57:29.750895   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.750906   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:29.750914   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:29.750965   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:29.790274   65393 cri.go:89] found id: ""
	I0404 22:57:29.790302   65393 logs.go:276] 0 containers: []
	W0404 22:57:29.790311   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:29.790320   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:29.790332   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:29.831848   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:29.831884   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:29.889443   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:29.889478   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:29.905468   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:29.905500   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:29.983283   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:29.983313   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:29.983328   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:27.862273   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:30.361189   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:29.806568   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:31.807248   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.341638   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.841257   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:32.567351   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:32.581988   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:32.582052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:32.619164   65393 cri.go:89] found id: ""
	I0404 22:57:32.619191   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.619201   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:32.619208   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:32.619262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:32.656844   65393 cri.go:89] found id: ""
	I0404 22:57:32.656882   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.656894   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:32.656902   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:32.656966   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:32.693024   65393 cri.go:89] found id: ""
	I0404 22:57:32.693052   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.693064   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:32.693071   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:32.693134   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:32.732579   65393 cri.go:89] found id: ""
	I0404 22:57:32.732609   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.732618   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:32.732625   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:32.732685   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:32.777586   65393 cri.go:89] found id: ""
	I0404 22:57:32.777624   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.777634   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:32.777644   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:32.777713   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:32.817079   65393 cri.go:89] found id: ""
	I0404 22:57:32.817115   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.817127   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:32.817136   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:32.817199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:32.856945   65393 cri.go:89] found id: ""
	I0404 22:57:32.856978   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.856988   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:32.856994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:32.857047   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:32.895047   65393 cri.go:89] found id: ""
	I0404 22:57:32.895070   65393 logs.go:276] 0 containers: []
	W0404 22:57:32.895082   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:32.895090   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:32.895103   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.949865   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:32.949904   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:32.964629   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:32.964656   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:33.039424   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:33.039446   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:33.039458   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:33.115819   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:33.115854   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:35.659512   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:35.674512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:35.674589   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:35.714058   65393 cri.go:89] found id: ""
	I0404 22:57:35.714093   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.714102   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:35.714109   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:35.714173   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:35.753765   65393 cri.go:89] found id: ""
	I0404 22:57:35.753800   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.753811   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:35.753819   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:35.753883   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:35.792177   65393 cri.go:89] found id: ""
	I0404 22:57:35.792204   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.792216   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:35.792223   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:35.792288   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:35.829891   65393 cri.go:89] found id: ""
	I0404 22:57:35.829923   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.829943   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:35.829952   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:35.830012   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:35.872224   65393 cri.go:89] found id: ""
	I0404 22:57:35.872255   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.872267   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:35.872276   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:35.872341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:35.909984   65393 cri.go:89] found id: ""
	I0404 22:57:35.910010   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.910020   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:35.910027   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:35.910088   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:35.948755   65393 cri.go:89] found id: ""
	I0404 22:57:35.948784   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.948795   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:35.948805   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:35.948868   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:35.984167   65393 cri.go:89] found id: ""
	I0404 22:57:35.984195   65393 logs.go:276] 0 containers: []
	W0404 22:57:35.984203   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:35.984212   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:35.984224   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:35.998714   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:35.998740   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:36.070003   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:36.070028   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:36.070044   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:36.149404   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:36.149442   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:36.190749   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:36.190776   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:32.362424   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.861745   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:34.306625   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.805161   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:36.841627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.341476   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:38.747728   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:38.761768   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:38.761840   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:38.802423   65393 cri.go:89] found id: ""
	I0404 22:57:38.802456   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.802467   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:38.802474   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:38.802525   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:38.843476   65393 cri.go:89] found id: ""
	I0404 22:57:38.843500   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.843508   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:38.843513   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:38.843583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:38.883093   65393 cri.go:89] found id: ""
	I0404 22:57:38.883125   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.883136   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:38.883145   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:38.883203   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:38.919802   65393 cri.go:89] found id: ""
	I0404 22:57:38.919831   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.919840   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:38.919847   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:38.919914   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:38.955242   65393 cri.go:89] found id: ""
	I0404 22:57:38.955280   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.955294   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:38.955302   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:38.955366   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:38.993377   65393 cri.go:89] found id: ""
	I0404 22:57:38.993409   65393 logs.go:276] 0 containers: []
	W0404 22:57:38.993420   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:38.993428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:38.993486   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:39.033379   65393 cri.go:89] found id: ""
	I0404 22:57:39.033406   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.033417   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:39.033424   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:39.033499   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:39.076827   65393 cri.go:89] found id: ""
	I0404 22:57:39.076853   65393 logs.go:276] 0 containers: []
	W0404 22:57:39.076862   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:39.076870   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:39.076880   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:39.134007   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:39.134045   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:39.149271   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:39.149298   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:39.222569   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:39.222593   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:39.222608   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:39.309583   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:39.309613   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:37.367650   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.862967   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:39.305489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.807107   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.842108   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.340297   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.343170   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:41.850659   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:41.866430   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:41.866493   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:41.906390   65393 cri.go:89] found id: ""
	I0404 22:57:41.906418   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.906437   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:41.906445   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:41.906506   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:41.942002   65393 cri.go:89] found id: ""
	I0404 22:57:41.942029   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.942039   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:41.942047   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:41.942105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:41.984054   65393 cri.go:89] found id: ""
	I0404 22:57:41.984078   65393 logs.go:276] 0 containers: []
	W0404 22:57:41.984086   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:41.984092   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:41.984167   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:42.022757   65393 cri.go:89] found id: ""
	I0404 22:57:42.022779   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.022787   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:42.022792   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:42.022869   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:42.063261   65393 cri.go:89] found id: ""
	I0404 22:57:42.063284   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.063292   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:42.063297   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:42.063347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:42.102752   65393 cri.go:89] found id: ""
	I0404 22:57:42.102783   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.102800   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:42.102809   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:42.102872   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:42.139533   65393 cri.go:89] found id: ""
	I0404 22:57:42.139561   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.139570   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:42.139576   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:42.139627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:42.180944   65393 cri.go:89] found id: ""
	I0404 22:57:42.180976   65393 logs.go:276] 0 containers: []
	W0404 22:57:42.180986   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:42.180996   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:42.181008   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:42.231499   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:42.231533   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:42.247888   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:42.247918   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:42.327828   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:42.327849   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:42.327860   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:42.413471   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:42.413509   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:44.958686   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:44.973081   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:44.973159   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:45.011068   65393 cri.go:89] found id: ""
	I0404 22:57:45.011095   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.011103   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:45.011108   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:45.011162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:45.052522   65393 cri.go:89] found id: ""
	I0404 22:57:45.052560   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.052578   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:45.052586   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:45.052649   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:45.091172   65393 cri.go:89] found id: ""
	I0404 22:57:45.091204   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.091215   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:45.091222   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:45.091289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:45.129853   65393 cri.go:89] found id: ""
	I0404 22:57:45.129883   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.129892   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:45.129900   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:45.129960   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:45.167779   65393 cri.go:89] found id: ""
	I0404 22:57:45.167806   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.167815   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:45.167822   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:45.167881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:45.205111   65393 cri.go:89] found id: ""
	I0404 22:57:45.205139   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.205153   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:45.205158   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:45.205231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:45.240931   65393 cri.go:89] found id: ""
	I0404 22:57:45.240957   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.240965   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:45.240971   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:45.241033   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:45.277934   65393 cri.go:89] found id: ""
	I0404 22:57:45.277956   65393 logs.go:276] 0 containers: []
	W0404 22:57:45.277964   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:45.277974   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:45.277989   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:45.332725   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:45.332755   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:45.349776   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:45.349806   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:45.428071   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:45.428098   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:45.428113   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:45.510148   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:45.510188   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:42.361878   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.861076   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:44.304994   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:46.305336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.841019   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.341801   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.052655   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:48.067339   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:48.067416   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:48.101725   65393 cri.go:89] found id: ""
	I0404 22:57:48.101756   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.101765   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:48.101771   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:48.101823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:48.137109   65393 cri.go:89] found id: ""
	I0404 22:57:48.137136   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.137147   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:48.137153   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:48.137216   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:48.174703   65393 cri.go:89] found id: ""
	I0404 22:57:48.174735   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.174745   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:48.174751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:48.174800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:48.213531   65393 cri.go:89] found id: ""
	I0404 22:57:48.213554   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.213565   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:48.213572   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:48.213621   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:48.252457   65393 cri.go:89] found id: ""
	I0404 22:57:48.252483   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.252493   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:48.252500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:48.252566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:48.297769   65393 cri.go:89] found id: ""
	I0404 22:57:48.297797   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.297807   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:48.297815   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:48.297871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:48.336275   65393 cri.go:89] found id: ""
	I0404 22:57:48.336297   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.336312   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:48.336319   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:48.336373   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:48.375192   65393 cri.go:89] found id: ""
	I0404 22:57:48.375220   65393 logs.go:276] 0 containers: []
	W0404 22:57:48.375230   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:48.375242   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:48.375256   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:48.428276   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:48.428309   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:48.442545   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:48.442572   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:48.521287   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:48.521314   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:48.521326   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:48.605921   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:48.605965   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:51.148671   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:51.163922   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:51.163993   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:51.205098   65393 cri.go:89] found id: ""
	I0404 22:57:51.205127   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.205137   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:51.205145   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:51.205210   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:51.241238   65393 cri.go:89] found id: ""
	I0404 22:57:51.241265   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.241276   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:51.241287   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:51.241352   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:51.279667   65393 cri.go:89] found id: ""
	I0404 22:57:51.279691   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.279700   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:51.279710   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:51.279767   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:51.325092   65393 cri.go:89] found id: ""
	I0404 22:57:51.325114   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.325121   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:51.325127   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:51.325175   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:51.367212   65393 cri.go:89] found id: ""
	I0404 22:57:51.367233   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.367244   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:51.367252   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:51.367334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:51.407716   65393 cri.go:89] found id: ""
	I0404 22:57:51.407739   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.407747   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:51.407753   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:51.407800   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:51.448021   65393 cri.go:89] found id: ""
	I0404 22:57:51.448050   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.448060   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:51.448066   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:51.448138   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:46.862201   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:49.361167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.365099   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:48.806224   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.306785   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.840934   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.845038   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:51.484815   65393 cri.go:89] found id: ""
	I0404 22:57:51.484841   65393 logs.go:276] 0 containers: []
	W0404 22:57:51.484849   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:51.484857   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:51.484868   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:51.538503   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:51.538530   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:51.552658   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:51.552686   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:51.628261   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:51.628288   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:51.628313   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:51.713890   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:51.713935   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:54.257046   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:54.270797   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:54.270853   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:54.308550   65393 cri.go:89] found id: ""
	I0404 22:57:54.308579   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.308589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:54.308596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:54.308666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:54.347736   65393 cri.go:89] found id: ""
	I0404 22:57:54.347764   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.347773   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:54.347779   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:54.347825   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:54.389179   65393 cri.go:89] found id: ""
	I0404 22:57:54.389209   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.389220   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:54.389227   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:54.389286   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:54.429961   65393 cri.go:89] found id: ""
	I0404 22:57:54.429984   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.429993   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:54.429998   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:54.430053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:54.469383   65393 cri.go:89] found id: ""
	I0404 22:57:54.469406   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.469414   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:54.469419   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:54.469481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:54.506260   65393 cri.go:89] found id: ""
	I0404 22:57:54.506291   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.506302   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:54.506309   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:54.506374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:54.545050   65393 cri.go:89] found id: ""
	I0404 22:57:54.545080   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.545090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:54.545096   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:54.545144   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:54.584895   65393 cri.go:89] found id: ""
	I0404 22:57:54.584932   65393 logs.go:276] 0 containers: []
	W0404 22:57:54.584943   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:54.584955   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:54.584969   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:57:54.637294   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:54.637329   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:54.652792   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:54.652821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:54.729248   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:54.729268   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:54.729286   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:54.810107   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:54.810135   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:53.860177   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.861048   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:53.805077   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:55.806516   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.306075   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:58.341767   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.343121   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:57:57.356688   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:57:57.371841   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:57:57.371901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:57:57.408063   65393 cri.go:89] found id: ""
	I0404 22:57:57.408087   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.408095   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:57:57.408112   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:57:57.408199   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:57:57.444197   65393 cri.go:89] found id: ""
	I0404 22:57:57.444222   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.444231   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:57:57.444241   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:57:57.444291   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:57:57.479502   65393 cri.go:89] found id: ""
	I0404 22:57:57.479528   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.479536   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:57:57.479542   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:57:57.479593   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:57:57.515017   65393 cri.go:89] found id: ""
	I0404 22:57:57.515044   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.515057   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:57:57.515064   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:57:57.515113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:57:57.550568   65393 cri.go:89] found id: ""
	I0404 22:57:57.550595   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.550603   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:57:57.550609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:57:57.550670   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:57:57.587662   65393 cri.go:89] found id: ""
	I0404 22:57:57.587689   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.587697   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:57:57.587703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:57:57.587761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:57:57.624184   65393 cri.go:89] found id: ""
	I0404 22:57:57.624206   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.624213   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:57:57.624219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:57:57.624274   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:57:57.662289   65393 cri.go:89] found id: ""
	I0404 22:57:57.662312   65393 logs.go:276] 0 containers: []
	W0404 22:57:57.662320   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:57:57.662329   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:57:57.662339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:57:57.677703   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:57:57.677728   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:57:57.768164   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:57:57.768191   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:57:57.768206   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.853138   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:57:57.853175   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:57:57.896254   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:57:57.896291   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.448289   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:00.462243   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:00.462306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:00.500146   65393 cri.go:89] found id: ""
	I0404 22:58:00.500180   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.500191   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:00.500199   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:00.500260   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:00.539443   65393 cri.go:89] found id: ""
	I0404 22:58:00.539469   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.539477   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:00.539482   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:00.539532   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:00.582120   65393 cri.go:89] found id: ""
	I0404 22:58:00.582149   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.582160   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:00.582167   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:00.582231   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:00.622269   65393 cri.go:89] found id: ""
	I0404 22:58:00.622291   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.622299   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:00.622305   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:00.622361   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:00.659947   65393 cri.go:89] found id: ""
	I0404 22:58:00.659980   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.659992   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:00.659999   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:00.660053   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:00.695604   65393 cri.go:89] found id: ""
	I0404 22:58:00.695632   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.695642   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:00.695650   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:00.695707   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:00.737962   65393 cri.go:89] found id: ""
	I0404 22:58:00.738044   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.738065   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:00.738075   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:00.738168   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:00.781043   65393 cri.go:89] found id: ""
	I0404 22:58:00.781069   65393 logs.go:276] 0 containers: []
	W0404 22:58:00.781076   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:00.781085   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:00.781096   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:00.828143   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:00.828174   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:00.883483   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:00.883521   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:00.899435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:00.899463   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:00.975258   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:00.975277   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:00.975288   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:57:57.861562   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.362255   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:00.306310   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.805208   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:02.844797   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.341139   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:03.553230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:03.567909   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:03.568005   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:03.608638   65393 cri.go:89] found id: ""
	I0404 22:58:03.608666   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.608675   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:03.608680   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:03.608727   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:03.647698   65393 cri.go:89] found id: ""
	I0404 22:58:03.647726   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.647735   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:03.647741   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:03.647804   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:03.686143   65393 cri.go:89] found id: ""
	I0404 22:58:03.686167   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.686176   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:03.686181   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:03.686230   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:03.723462   65393 cri.go:89] found id: ""
	I0404 22:58:03.723487   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.723494   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:03.723500   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:03.723556   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:03.762299   65393 cri.go:89] found id: ""
	I0404 22:58:03.762332   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.762342   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:03.762350   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:03.762426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:03.798165   65393 cri.go:89] found id: ""
	I0404 22:58:03.798198   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.798215   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:03.798225   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:03.798292   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:03.837607   65393 cri.go:89] found id: ""
	I0404 22:58:03.837636   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.837648   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:03.837655   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:03.837716   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:03.877346   65393 cri.go:89] found id: ""
	I0404 22:58:03.877380   65393 logs.go:276] 0 containers: []
	W0404 22:58:03.877394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:03.877410   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:03.877432   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:03.934033   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:03.934073   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:03.949106   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:03.949131   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:04.022674   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:04.022695   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:04.022707   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:04.101187   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:04.101225   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:02.860519   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:04.861267   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:05.305637   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.804768   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:07.341713   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.841527   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:06.653116   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:06.667790   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:06.667867   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:06.712208   65393 cri.go:89] found id: ""
	I0404 22:58:06.712230   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.712238   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:06.712243   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:06.712289   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:06.748489   65393 cri.go:89] found id: ""
	I0404 22:58:06.748522   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.748533   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:06.748540   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:06.748602   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:06.786720   65393 cri.go:89] found id: ""
	I0404 22:58:06.786745   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.786753   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:06.786758   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:06.786805   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:06.825404   65393 cri.go:89] found id: ""
	I0404 22:58:06.825437   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.825444   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:06.825461   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:06.825515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:06.864851   65393 cri.go:89] found id: ""
	I0404 22:58:06.864879   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.864890   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:06.864898   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:06.864959   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:06.906229   65393 cri.go:89] found id: ""
	I0404 22:58:06.906258   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.906268   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:06.906274   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:06.906327   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:06.946126   65393 cri.go:89] found id: ""
	I0404 22:58:06.946153   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.946164   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:06.946172   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:06.946234   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:06.983746   65393 cri.go:89] found id: ""
	I0404 22:58:06.983779   65393 logs.go:276] 0 containers: []
	W0404 22:58:06.983792   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:06.983805   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:06.983821   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:07.038290   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:07.038330   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:07.053847   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:07.053875   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:07.127453   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:07.127479   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:07.127496   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:07.206638   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:07.206676   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:09.753016   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:09.768850   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:09.768918   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:09.804549   65393 cri.go:89] found id: ""
	I0404 22:58:09.804579   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.804589   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:09.804596   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:09.804653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:09.847299   65393 cri.go:89] found id: ""
	I0404 22:58:09.847323   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.847334   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:09.847341   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:09.847399   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:09.902064   65393 cri.go:89] found id: ""
	I0404 22:58:09.902093   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.902104   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:09.902111   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:09.902171   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:09.953956   65393 cri.go:89] found id: ""
	I0404 22:58:09.953986   65393 logs.go:276] 0 containers: []
	W0404 22:58:09.953997   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:09.954003   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:09.954071   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:10.007853   65393 cri.go:89] found id: ""
	I0404 22:58:10.007884   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.007892   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:10.007897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:10.007954   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:10.046924   65393 cri.go:89] found id: ""
	I0404 22:58:10.046960   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.046970   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:10.046977   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:10.047038   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:10.086851   65393 cri.go:89] found id: ""
	I0404 22:58:10.086878   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.086890   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:10.086896   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:10.086956   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:10.126678   65393 cri.go:89] found id: ""
	I0404 22:58:10.126710   65393 logs.go:276] 0 containers: []
	W0404 22:58:10.126719   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:10.126727   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:10.126741   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:10.142641   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:10.142669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:10.226953   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:10.226978   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:10.226991   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:10.310046   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:10.310078   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:10.356140   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:10.356173   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:06.861772   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.361398   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:11.361942   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:09.806398   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.306003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.341652   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.341848   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.343338   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:12.911501   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:12.924292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:12.924374   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:12.959969   65393 cri.go:89] found id: ""
	I0404 22:58:12.959997   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.960007   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:12.960015   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:12.960064   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:12.998817   65393 cri.go:89] found id: ""
	I0404 22:58:12.998846   65393 logs.go:276] 0 containers: []
	W0404 22:58:12.998856   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:12.998876   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:12.998943   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:13.035295   65393 cri.go:89] found id: ""
	I0404 22:58:13.035326   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.035337   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:13.035343   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:13.035403   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:13.076712   65393 cri.go:89] found id: ""
	I0404 22:58:13.076735   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.076744   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:13.076751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:13.076823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:13.112987   65393 cri.go:89] found id: ""
	I0404 22:58:13.113015   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.113023   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:13.113029   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:13.113092   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:13.155330   65393 cri.go:89] found id: ""
	I0404 22:58:13.155355   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.155366   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:13.155373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:13.155432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:13.194322   65393 cri.go:89] found id: ""
	I0404 22:58:13.194357   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.194368   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:13.194374   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:13.194432   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:13.231939   65393 cri.go:89] found id: ""
	I0404 22:58:13.231969   65393 logs.go:276] 0 containers: []
	W0404 22:58:13.231981   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:13.231993   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:13.232012   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:13.312244   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:13.312278   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:13.356605   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:13.356640   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:13.411760   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:13.411801   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:13.427373   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:13.427397   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:13.509840   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.010230   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:16.023985   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:16.024050   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:16.063038   65393 cri.go:89] found id: ""
	I0404 22:58:16.063062   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.063070   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:16.063075   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:16.063128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:16.100140   65393 cri.go:89] found id: ""
	I0404 22:58:16.100170   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.100182   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:16.100189   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:16.100252   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:16.137319   65393 cri.go:89] found id: ""
	I0404 22:58:16.137354   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.137362   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:16.137373   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:16.137425   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:16.173127   65393 cri.go:89] found id: ""
	I0404 22:58:16.173151   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.173159   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:16.173166   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:16.173212   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:16.213639   65393 cri.go:89] found id: ""
	I0404 22:58:16.213667   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.213676   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:16.213683   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:16.213744   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:16.255272   65393 cri.go:89] found id: ""
	I0404 22:58:16.255301   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.255312   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:16.255320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:16.255381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:16.289359   65393 cri.go:89] found id: ""
	I0404 22:58:16.289387   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.289397   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:16.289404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:16.289466   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:16.326711   65393 cri.go:89] found id: ""
	I0404 22:58:16.326738   65393 logs.go:276] 0 containers: []
	W0404 22:58:16.326748   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:16.326791   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:16.326817   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:16.374754   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:16.374788   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:16.425956   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:16.425994   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:16.440815   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:16.440848   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:58:13.363022   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:15.860774   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:14.306144   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:16.804788   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.844733   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:21.340627   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	W0404 22:58:16.512725   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:16.512750   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:16.512770   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.099694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:19.113559   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:19.113617   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:19.148531   65393 cri.go:89] found id: ""
	I0404 22:58:19.148553   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.148561   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:19.148567   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:19.148627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:19.186728   65393 cri.go:89] found id: ""
	I0404 22:58:19.186750   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.186759   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:19.186764   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:19.186809   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:19.223221   65393 cri.go:89] found id: ""
	I0404 22:58:19.223269   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.223277   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:19.223283   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:19.223350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:19.260459   65393 cri.go:89] found id: ""
	I0404 22:58:19.260492   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.260502   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:19.260509   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:19.260571   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:19.298503   65393 cri.go:89] found id: ""
	I0404 22:58:19.298527   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.298534   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:19.298540   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:19.298603   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:19.339562   65393 cri.go:89] found id: ""
	I0404 22:58:19.339595   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.339605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:19.339613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:19.339674   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:19.379350   65393 cri.go:89] found id: ""
	I0404 22:58:19.379383   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.379394   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:19.379401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:19.379501   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:19.417360   65393 cri.go:89] found id: ""
	I0404 22:58:19.417387   65393 logs.go:276] 0 containers: []
	W0404 22:58:19.417394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:19.417403   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:19.417420   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:19.493267   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:19.493300   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:19.533913   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:19.533948   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:19.585900   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:19.585936   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:19.601225   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:19.601259   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:19.675774   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:17.861139   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.360910   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:18.806558   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:20.807066   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.304681   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:23.342104   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.843695   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:22.176660   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:22.190161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:22.190239   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:22.226569   65393 cri.go:89] found id: ""
	I0404 22:58:22.226601   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.226612   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:22.226621   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:22.226678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:22.263193   65393 cri.go:89] found id: ""
	I0404 22:58:22.263221   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.263232   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:22.263239   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:22.263296   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:22.305585   65393 cri.go:89] found id: ""
	I0404 22:58:22.305613   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.305625   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:22.305632   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:22.305688   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:22.343575   65393 cri.go:89] found id: ""
	I0404 22:58:22.343602   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.343613   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:22.343620   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:22.343675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:22.381394   65393 cri.go:89] found id: ""
	I0404 22:58:22.381423   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.381432   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:22.381438   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:22.381488   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:22.423631   65393 cri.go:89] found id: ""
	I0404 22:58:22.423664   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.423673   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:22.423680   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:22.423755   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:22.462618   65393 cri.go:89] found id: ""
	I0404 22:58:22.462651   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.462662   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:22.462669   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:22.462729   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:22.499448   65393 cri.go:89] found id: ""
	I0404 22:58:22.499473   65393 logs.go:276] 0 containers: []
	W0404 22:58:22.499481   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:22.499490   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:22.499504   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:22.552937   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:22.552976   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:22.568480   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:22.568508   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:22.647552   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:22.647575   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:22.647587   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:22.732328   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:22.732366   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:25.275187   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:25.290575   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:25.290660   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:25.329699   65393 cri.go:89] found id: ""
	I0404 22:58:25.329728   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.329737   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:25.329744   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:25.329808   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:25.368206   65393 cri.go:89] found id: ""
	I0404 22:58:25.368239   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.368250   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:25.368257   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:25.368320   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:25.405444   65393 cri.go:89] found id: ""
	I0404 22:58:25.405490   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.405500   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:25.405508   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:25.405566   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:25.448856   65393 cri.go:89] found id: ""
	I0404 22:58:25.448883   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.448891   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:25.448897   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:25.448952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:25.489308   65393 cri.go:89] found id: ""
	I0404 22:58:25.489340   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.489351   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:25.489358   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:25.489418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:25.532786   65393 cri.go:89] found id: ""
	I0404 22:58:25.532810   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.532820   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:25.532828   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:25.532887   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:25.569370   65393 cri.go:89] found id: ""
	I0404 22:58:25.569400   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.569409   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:25.569428   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:25.569508   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:25.609516   65393 cri.go:89] found id: ""
	I0404 22:58:25.609542   65393 logs.go:276] 0 containers: []
	W0404 22:58:25.609553   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:25.609564   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:25.609579   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:25.661976   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:25.662011   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:25.676718   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:25.676743   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:25.754582   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:25.754612   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:25.754629   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:25.837707   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:25.837751   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:22.367423   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:24.860670   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:25.306162   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:27.306275   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.343146   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.840946   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.381474   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:28.396475   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:28.396549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:28.439207   65393 cri.go:89] found id: ""
	I0404 22:58:28.439239   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.439251   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:28.439259   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:28.439341   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:28.481537   65393 cri.go:89] found id: ""
	I0404 22:58:28.481559   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.481567   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:28.481572   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:28.481622   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:28.522159   65393 cri.go:89] found id: ""
	I0404 22:58:28.522183   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.522194   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:28.522202   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:28.522267   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:28.563587   65393 cri.go:89] found id: ""
	I0404 22:58:28.563623   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.563634   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:28.563641   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:28.563706   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:28.601846   65393 cri.go:89] found id: ""
	I0404 22:58:28.601874   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.601885   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:28.601892   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:28.601971   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:28.639734   65393 cri.go:89] found id: ""
	I0404 22:58:28.639758   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.639765   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:28.639773   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:28.639832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:28.679049   65393 cri.go:89] found id: ""
	I0404 22:58:28.679079   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.679090   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:28.679097   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:28.679152   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:28.721353   65393 cri.go:89] found id: ""
	I0404 22:58:28.721380   65393 logs.go:276] 0 containers: []
	W0404 22:58:28.721390   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:28.721400   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:28.721414   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:28.776618   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:28.776666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:28.792435   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:28.792473   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:28.867190   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:28.867219   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:28.867238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:28.950021   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:28.950056   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:26.861121   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:28.861752   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:30.862006   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:29.805482   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.806982   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:32.843651   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.341399   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:31.496813   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:31.511401   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:31.511462   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:31.552422   65393 cri.go:89] found id: ""
	I0404 22:58:31.552450   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.552458   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:31.552464   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:31.552518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:31.590548   65393 cri.go:89] found id: ""
	I0404 22:58:31.590579   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.590591   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:31.590598   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:31.590653   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:31.626198   65393 cri.go:89] found id: ""
	I0404 22:58:31.626227   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.626238   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:31.626245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:31.626316   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:31.664925   65393 cri.go:89] found id: ""
	I0404 22:58:31.664952   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.664960   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:31.664966   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:31.665017   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:31.703008   65393 cri.go:89] found id: ""
	I0404 22:58:31.703031   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.703038   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:31.703044   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:31.703103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:31.741854   65393 cri.go:89] found id: ""
	I0404 22:58:31.741875   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.741884   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:31.741890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:31.741942   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:31.782573   65393 cri.go:89] found id: ""
	I0404 22:58:31.782603   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.782616   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:31.782624   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:31.782675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:31.819796   65393 cri.go:89] found id: ""
	I0404 22:58:31.819832   65393 logs.go:276] 0 containers: []
	W0404 22:58:31.819844   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:31.819855   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:31.819872   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:31.836362   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:31.836396   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:31.916417   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:31.916438   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:31.916451   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:31.999266   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:31.999302   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:32.048147   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:32.048179   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:34.601683   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:34.618245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:34.618329   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:34.660565   65393 cri.go:89] found id: ""
	I0404 22:58:34.660591   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.660598   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:34.660604   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:34.660654   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:34.696366   65393 cri.go:89] found id: ""
	I0404 22:58:34.696389   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.696397   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:34.696402   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:34.696463   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:34.734234   65393 cri.go:89] found id: ""
	I0404 22:58:34.734281   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.734293   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:34.734300   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:34.734369   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:34.770632   65393 cri.go:89] found id: ""
	I0404 22:58:34.770668   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.770681   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:34.770689   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:34.770752   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:34.808562   65393 cri.go:89] found id: ""
	I0404 22:58:34.808590   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.808600   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:34.808607   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:34.808677   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:34.844177   65393 cri.go:89] found id: ""
	I0404 22:58:34.844209   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.844219   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:34.844228   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:34.844315   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:34.886060   65393 cri.go:89] found id: ""
	I0404 22:58:34.886095   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.886106   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:34.886114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:34.886174   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:34.924731   65393 cri.go:89] found id: ""
	I0404 22:58:34.924759   65393 logs.go:276] 0 containers: []
	W0404 22:58:34.924769   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:34.924781   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:34.924798   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:34.940405   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:34.940437   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:35.016762   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:35.016784   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:35.016800   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:35.096653   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:35.096688   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:35.144213   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:35.144241   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:33.361607   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:35.860598   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:34.307621   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:36.805003   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.341552   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.841619   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:37.702332   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:37.716442   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:37.716515   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:37.755197   65393 cri.go:89] found id: ""
	I0404 22:58:37.755233   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.755244   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:37.755251   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:37.755311   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:37.799011   65393 cri.go:89] found id: ""
	I0404 22:58:37.799036   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.799044   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:37.799048   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:37.799105   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:37.837437   65393 cri.go:89] found id: ""
	I0404 22:58:37.837466   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.837477   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:37.837486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:37.837543   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:37.876014   65393 cri.go:89] found id: ""
	I0404 22:58:37.876085   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.876097   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:37.876104   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:37.876179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:37.915088   65393 cri.go:89] found id: ""
	I0404 22:58:37.915121   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.915132   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:37.915140   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:37.915205   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:37.954991   65393 cri.go:89] found id: ""
	I0404 22:58:37.955028   65393 logs.go:276] 0 containers: []
	W0404 22:58:37.955039   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:37.955047   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:37.955120   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:38.006875   65393 cri.go:89] found id: ""
	I0404 22:58:38.006906   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.006924   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:38.006930   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:38.006989   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:38.044477   65393 cri.go:89] found id: ""
	I0404 22:58:38.044513   65393 logs.go:276] 0 containers: []
	W0404 22:58:38.044541   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:38.044553   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:38.044569   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:38.086425   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:38.086455   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:38.140159   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:38.140195   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:38.156371   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:38.156406   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:38.229011   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:38.229035   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:38.229058   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:40.809399   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:40.824612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:40.824694   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:40.869397   65393 cri.go:89] found id: ""
	I0404 22:58:40.869483   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.869510   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:40.869523   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:40.869583   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:40.911732   65393 cri.go:89] found id: ""
	I0404 22:58:40.911760   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.911782   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:40.911788   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:40.911846   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:40.952165   65393 cri.go:89] found id: ""
	I0404 22:58:40.952193   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.952202   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:40.952209   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:40.952270   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:40.991566   65393 cri.go:89] found id: ""
	I0404 22:58:40.991598   65393 logs.go:276] 0 containers: []
	W0404 22:58:40.991607   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:40.991613   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:40.991661   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:41.033467   65393 cri.go:89] found id: ""
	I0404 22:58:41.033496   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.033505   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:41.033534   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:41.033595   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:41.073350   65393 cri.go:89] found id: ""
	I0404 22:58:41.073395   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.073405   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:41.073410   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:41.073460   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:41.113435   65393 cri.go:89] found id: ""
	I0404 22:58:41.113467   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.113478   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:41.113486   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:41.113549   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:41.152848   65393 cri.go:89] found id: ""
	I0404 22:58:41.152882   65393 logs.go:276] 0 containers: []
	W0404 22:58:41.152892   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:41.152905   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:41.152919   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:41.199001   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:41.199039   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:41.251155   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:41.251200   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:41.268640   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:41.268669   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:41.345101   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:41.345125   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:41.345142   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:37.862623   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:39.865276   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:38.805692   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:40.806509   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.305851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:42.342629   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.841943   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:43.925251   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:43.940719   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:43.940838   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:43.982366   65393 cri.go:89] found id: ""
	I0404 22:58:43.982391   65393 logs.go:276] 0 containers: []
	W0404 22:58:43.982401   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:43.982409   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:43.982477   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:44.026906   65393 cri.go:89] found id: ""
	I0404 22:58:44.026941   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.026952   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:44.026959   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:44.027024   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:44.063914   65393 cri.go:89] found id: ""
	I0404 22:58:44.063940   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.063948   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:44.063954   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:44.064008   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:44.107234   65393 cri.go:89] found id: ""
	I0404 22:58:44.107270   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.107283   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:44.107292   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:44.107388   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:44.144613   65393 cri.go:89] found id: ""
	I0404 22:58:44.144637   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.144658   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:44.144664   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:44.144734   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:44.184821   65393 cri.go:89] found id: ""
	I0404 22:58:44.184858   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.184866   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:44.184872   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:44.184920   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:44.227152   65393 cri.go:89] found id: ""
	I0404 22:58:44.227181   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.227192   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:44.227200   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:44.227262   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:44.266503   65393 cri.go:89] found id: ""
	I0404 22:58:44.266533   65393 logs.go:276] 0 containers: []
	W0404 22:58:44.266544   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:44.266599   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:44.266614   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:44.323524   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:44.323565   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:44.340420   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:44.340456   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:44.441098   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:44.441120   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:44.441137   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:44.554462   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:44.554498   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:42.361529   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:44.362158   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:45.805321   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.805597   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.342230   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.840465   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:47.101901   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:47.116485   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:47.116551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:47.158013   65393 cri.go:89] found id: ""
	I0404 22:58:47.158047   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.158063   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:47.158071   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:47.158136   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:47.194653   65393 cri.go:89] found id: ""
	I0404 22:58:47.194677   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.194688   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:47.194696   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:47.194753   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:47.235406   65393 cri.go:89] found id: ""
	I0404 22:58:47.235435   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.235447   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:47.235456   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:47.235518   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:47.278696   65393 cri.go:89] found id: ""
	I0404 22:58:47.278724   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.278733   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:47.278741   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:47.278832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:47.316839   65393 cri.go:89] found id: ""
	I0404 22:58:47.316871   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.316883   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:47.316890   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:47.316952   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:47.363249   65393 cri.go:89] found id: ""
	I0404 22:58:47.363274   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.363282   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:47.363287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:47.363336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:47.402331   65393 cri.go:89] found id: ""
	I0404 22:58:47.402354   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.402362   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:47.402369   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:47.402429   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:47.441136   65393 cri.go:89] found id: ""
	I0404 22:58:47.441156   65393 logs.go:276] 0 containers: []
	W0404 22:58:47.441163   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:47.441171   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:47.441182   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:47.518956   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:47.518981   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:47.518996   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:47.600303   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:47.600339   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:47.642110   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:47.642138   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:47.694231   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:47.694267   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.209744   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:50.229994   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:50.230052   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:50.299090   65393 cri.go:89] found id: ""
	I0404 22:58:50.299237   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.299257   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:50.299266   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:50.299324   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:50.353476   65393 cri.go:89] found id: ""
	I0404 22:58:50.353506   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.353516   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:50.353524   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:50.353580   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:50.407652   65393 cri.go:89] found id: ""
	I0404 22:58:50.407677   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.407684   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:50.407692   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:50.407775   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:50.445622   65393 cri.go:89] found id: ""
	I0404 22:58:50.445651   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.445658   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:50.445666   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:50.445749   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:50.485764   65393 cri.go:89] found id: ""
	I0404 22:58:50.485791   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.485803   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:50.485810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:50.485891   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:50.525552   65393 cri.go:89] found id: ""
	I0404 22:58:50.525583   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.525601   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:50.525609   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:50.525675   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:50.564452   65393 cri.go:89] found id: ""
	I0404 22:58:50.564477   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.564488   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:50.564498   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:50.564548   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:50.608534   65393 cri.go:89] found id: ""
	I0404 22:58:50.608564   65393 logs.go:276] 0 containers: []
	W0404 22:58:50.608572   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:50.608580   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:50.608592   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:50.686645   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:50.686690   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:50.731681   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:50.731711   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:50.788550   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:50.788593   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:50.804637   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:50.804666   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:50.888452   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:46.860641   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.362015   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:49.808312   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:52.304782   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:51.841258   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.842803   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.341352   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.388937   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:53.403744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:53.403822   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:53.441796   65393 cri.go:89] found id: ""
	I0404 22:58:53.441817   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.441826   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:53.441835   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:53.441899   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:53.486302   65393 cri.go:89] found id: ""
	I0404 22:58:53.486326   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.486335   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:53.486340   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:53.486405   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:53.526429   65393 cri.go:89] found id: ""
	I0404 22:58:53.526455   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.526462   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:53.526467   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:53.526521   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:53.568076   65393 cri.go:89] found id: ""
	I0404 22:58:53.568099   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.568107   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:53.568114   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:53.568185   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:53.608922   65393 cri.go:89] found id: ""
	I0404 22:58:53.608956   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.608964   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:53.608970   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:53.609027   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:53.645604   65393 cri.go:89] found id: ""
	I0404 22:58:53.645635   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.645646   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:53.645658   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:53.645718   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:53.684258   65393 cri.go:89] found id: ""
	I0404 22:58:53.684283   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.684293   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:53.684301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:53.684359   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:53.722616   65393 cri.go:89] found id: ""
	I0404 22:58:53.722647   65393 logs.go:276] 0 containers: []
	W0404 22:58:53.722658   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:53.722671   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:53.722685   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:53.781126   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:53.781162   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:53.796188   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:53.796219   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:53.880536   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:53.880558   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:53.880571   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:53.970199   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:53.970236   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:51.861594   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:53.864468   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.361876   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:54.306294   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.805489   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.341483   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.840493   64902 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:56.510589   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:56.525258   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:56.525336   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:56.563463   65393 cri.go:89] found id: ""
	I0404 22:58:56.563495   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.563506   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:56.563514   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:56.563576   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:56.603330   65393 cri.go:89] found id: ""
	I0404 22:58:56.603355   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.603364   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:56.603369   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:56.603418   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:56.645266   65393 cri.go:89] found id: ""
	I0404 22:58:56.645291   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.645351   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:56.645368   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:56.645426   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:56.691073   65393 cri.go:89] found id: ""
	I0404 22:58:56.691098   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.691108   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:56.691121   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:56.691179   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:56.729687   65393 cri.go:89] found id: ""
	I0404 22:58:56.729726   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.729737   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:56.729744   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:56.729832   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:56.770699   65393 cri.go:89] found id: ""
	I0404 22:58:56.770732   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.770743   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:56.770751   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:56.770815   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:56.808948   65393 cri.go:89] found id: ""
	I0404 22:58:56.808972   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.808982   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:56.808989   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:56.809069   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:56.852467   65393 cri.go:89] found id: ""
	I0404 22:58:56.852490   65393 logs.go:276] 0 containers: []
	W0404 22:58:56.852501   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:56.852511   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:58:56.852526   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:58:56.903115   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:58:56.903147   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:58:56.919521   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:58:56.919550   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:58:56.995265   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:56.995297   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:56.995317   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:58:57.071623   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:58:57.071660   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:58:59.614687   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:58:59.628404   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:58:59.628474   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:58:59.666293   65393 cri.go:89] found id: ""
	I0404 22:58:59.666320   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.666328   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:58:59.666332   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:58:59.666407   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:58:59.706066   65393 cri.go:89] found id: ""
	I0404 22:58:59.706093   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.706104   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:58:59.706111   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:58:59.706166   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:58:59.750467   65393 cri.go:89] found id: ""
	I0404 22:58:59.750493   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.750504   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:58:59.750511   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:58:59.750575   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:58:59.788477   65393 cri.go:89] found id: ""
	I0404 22:58:59.788499   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.788507   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:58:59.788512   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:58:59.788558   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:58:59.829030   65393 cri.go:89] found id: ""
	I0404 22:58:59.829052   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.829062   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:58:59.829069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:58:59.829151   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:58:59.870121   65393 cri.go:89] found id: ""
	I0404 22:58:59.870146   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.870156   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:58:59.870163   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:58:59.870225   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:58:59.912149   65393 cri.go:89] found id: ""
	I0404 22:58:59.912170   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.912178   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:58:59.912185   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:58:59.912245   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:58:59.950867   65393 cri.go:89] found id: ""
	I0404 22:58:59.950903   65393 logs.go:276] 0 containers: []
	W0404 22:58:59.950914   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:58:59.950924   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:58:59.950950   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:00.031828   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:00.031862   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:00.079398   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:00.079425   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:00.128993   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:00.129024   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:00.146214   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:00.146238   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:00.224580   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:58:58.362770   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:00.861231   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:58:58.806527   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.305039   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:03.306465   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:01.845344   64902 pod_ready.go:81] duration metric: took 4m0.011500779s for pod "metrics-server-57f55c9bc5-xwm4m" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:01.845369   64902 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:01.845377   64902 pod_ready.go:38] duration metric: took 4m3.21302807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:01.845392   64902 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:01.845433   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:01.845499   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:01.927539   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:01.927568   64902 cri.go:89] found id: ""
	I0404 22:59:01.927578   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:01.927638   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.933410   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:01.933496   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:01.990735   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:01.990765   64902 cri.go:89] found id: ""
	I0404 22:59:01.990773   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:01.990823   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:01.996039   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:01.996159   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.043251   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:02.043278   64902 cri.go:89] found id: ""
	I0404 22:59:02.043286   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:02.043339   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.048227   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.048300   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.089311   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:02.089334   64902 cri.go:89] found id: ""
	I0404 22:59:02.089344   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:02.089400   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.094466   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.094531   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.146624   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:02.146646   64902 cri.go:89] found id: ""
	I0404 22:59:02.146653   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:02.146711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.151408   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.151491   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.194337   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:02.194361   64902 cri.go:89] found id: ""
	I0404 22:59:02.194370   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:02.194423   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.199225   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:02.199290   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:02.240467   64902 cri.go:89] found id: ""
	I0404 22:59:02.240494   64902 logs.go:276] 0 containers: []
	W0404 22:59:02.240505   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:02.240511   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:02.240572   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:02.284235   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:02.284261   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.284265   64902 cri.go:89] found id: ""
	I0404 22:59:02.284272   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:02.284337   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.289673   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:02.294519   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:02.294542   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:02.335250   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:02.335274   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:02.903414   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:02.903453   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:02.959171   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:02.959205   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:03.121608   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:03.121639   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:03.178477   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:03.178513   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:03.220790   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:03.220827   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:03.268659   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:03.268691   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:03.349809   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:03.349853   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:03.397296   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.397325   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:03.450216   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.450242   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.467583   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:03.467610   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:03.525777   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:03.525816   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.077111   64902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:06.097631   64902 api_server.go:72] duration metric: took 4m15.180038628s to wait for apiserver process to appear ...
	I0404 22:59:06.097660   64902 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:06.097705   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:06.097767   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:06.146987   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:06.147013   64902 cri.go:89] found id: ""
	I0404 22:59:06.147023   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:06.147083   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.153474   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:06.153549   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:06.204491   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.204515   64902 cri.go:89] found id: ""
	I0404 22:59:06.204522   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:06.204576   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.209689   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:06.209768   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.248698   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:06.248730   64902 cri.go:89] found id: ""
	I0404 22:59:06.248741   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:06.248803   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.254268   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.254362   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.301004   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.301024   64902 cri.go:89] found id: ""
	I0404 22:59:06.301034   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:06.301093   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.306557   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.306625   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.350111   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:06.350136   64902 cri.go:89] found id: ""
	I0404 22:59:06.350146   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:06.350205   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.355488   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.355574   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:02.724888   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:02.738926   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:02.738986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:02.780451   65393 cri.go:89] found id: ""
	I0404 22:59:02.780475   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.780486   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:02.780493   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:02.780551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:02.825725   65393 cri.go:89] found id: ""
	I0404 22:59:02.825745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.825753   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:02.825758   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:02.825806   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:02.866717   65393 cri.go:89] found id: ""
	I0404 22:59:02.866745   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.866752   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:02.866757   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:02.866803   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:02.909020   65393 cri.go:89] found id: ""
	I0404 22:59:02.909040   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.909048   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:02.909053   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:02.909103   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:02.951031   65393 cri.go:89] found id: ""
	I0404 22:59:02.951055   65393 logs.go:276] 0 containers: []
	W0404 22:59:02.951064   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:02.951069   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:02.951128   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:03.000274   65393 cri.go:89] found id: ""
	I0404 22:59:03.000304   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.000315   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:03.000322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:03.000385   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:03.041766   65393 cri.go:89] found id: ""
	I0404 22:59:03.041797   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.041807   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:03.041814   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:03.041871   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:03.086600   65393 cri.go:89] found id: ""
	I0404 22:59:03.086623   65393 logs.go:276] 0 containers: []
	W0404 22:59:03.086631   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:03.086639   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:03.086654   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:03.145868   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:03.145902   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:03.164345   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:03.164373   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:03.239295   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:03.239331   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:03.239347   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:03.337429   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:03.337471   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:05.885881   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:05.899569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:05.899627   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:05.939067   65393 cri.go:89] found id: ""
	I0404 22:59:05.939090   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.939097   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:05.939104   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:05.939163   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:05.977410   65393 cri.go:89] found id: ""
	I0404 22:59:05.977434   65393 logs.go:276] 0 containers: []
	W0404 22:59:05.977441   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:05.977447   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:05.977492   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:06.018127   65393 cri.go:89] found id: ""
	I0404 22:59:06.018149   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.018156   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:06.018161   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:06.018211   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:06.057280   65393 cri.go:89] found id: ""
	I0404 22:59:06.057316   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.057327   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:06.057334   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:06.057396   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:06.095220   65393 cri.go:89] found id: ""
	I0404 22:59:06.095246   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.095255   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:06.095262   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:06.095334   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:06.136192   65393 cri.go:89] found id: ""
	I0404 22:59:06.136291   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.136310   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:06.136320   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.136381   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.193307   65393 cri.go:89] found id: ""
	I0404 22:59:06.193336   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.193347   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.193355   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:06.193415   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:06.233527   65393 cri.go:89] found id: ""
	I0404 22:59:06.233558   65393 logs.go:276] 0 containers: []
	W0404 22:59:06.233566   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:06.233574   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.233585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:06.320567   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.320602   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.363687   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:06.363718   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:06.423209   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:06.423246   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:06.437978   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:06.438009   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:02.862485   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.360827   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:05.805057   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:07.805584   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:06.405660   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.405683   64902 cri.go:89] found id: ""
	I0404 22:59:06.405693   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:06.405758   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.410717   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:06.410794   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:06.462354   64902 cri.go:89] found id: ""
	I0404 22:59:06.462386   64902 logs.go:276] 0 containers: []
	W0404 22:59:06.462398   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:06.462404   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:06.462452   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:06.511014   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.511038   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.511044   64902 cri.go:89] found id: ""
	I0404 22:59:06.511052   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:06.511110   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.517858   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:06.522766   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:06.522794   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:06.576654   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:06.576689   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:06.623256   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:06.623286   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:06.678337   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:06.678369   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:06.722261   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:06.722291   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:06.762151   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:06.762184   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:06.814956   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:06.814983   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:07.273914   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:07.273975   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:07.328704   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:07.328745   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:07.344734   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:07.344765   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:07.473031   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:07.473067   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:07.523879   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:07.523922   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:07.569734   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:07.569793   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.109972   64902 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0404 22:59:10.115439   64902 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0404 22:59:10.117026   64902 api_server.go:141] control plane version: v1.29.3
	I0404 22:59:10.117051   64902 api_server.go:131] duration metric: took 4.019378057s to wait for apiserver health ...
	I0404 22:59:10.117059   64902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:10.117084   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:10.117138   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:10.161095   64902 cri.go:89] found id: "31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.161113   64902 cri.go:89] found id: ""
	I0404 22:59:10.161120   64902 logs.go:276] 1 containers: [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef]
	I0404 22:59:10.161167   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.165630   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:10.165694   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:10.204636   64902 cri.go:89] found id: "ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.204655   64902 cri.go:89] found id: ""
	I0404 22:59:10.204662   64902 logs.go:276] 1 containers: [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c]
	I0404 22:59:10.204711   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.209645   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:10.209721   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:10.256830   64902 cri.go:89] found id: "712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:10.256856   64902 cri.go:89] found id: ""
	I0404 22:59:10.256866   64902 logs.go:276] 1 containers: [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429]
	I0404 22:59:10.256917   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.261699   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:10.261763   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:10.304897   64902 cri.go:89] found id: "46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.304914   64902 cri.go:89] found id: ""
	I0404 22:59:10.304922   64902 logs.go:276] 1 containers: [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88]
	I0404 22:59:10.304976   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.310884   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:10.310961   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:10.349724   64902 cri.go:89] found id: "27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:10.349745   64902 cri.go:89] found id: ""
	I0404 22:59:10.349754   64902 logs.go:276] 1 containers: [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664]
	I0404 22:59:10.349811   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.354588   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:10.354643   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:10.399066   64902 cri.go:89] found id: "58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.399087   64902 cri.go:89] found id: ""
	I0404 22:59:10.399113   64902 logs.go:276] 1 containers: [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b]
	I0404 22:59:10.399160   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.404698   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:10.404771   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:10.454142   64902 cri.go:89] found id: ""
	I0404 22:59:10.454173   64902 logs.go:276] 0 containers: []
	W0404 22:59:10.454183   64902 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:10.454189   64902 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:10.454347   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:10.503594   64902 cri.go:89] found id: "634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.503620   64902 cri.go:89] found id: "6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.503624   64902 cri.go:89] found id: ""
	I0404 22:59:10.503633   64902 logs.go:276] 2 containers: [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a]
	I0404 22:59:10.503696   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.510223   64902 ssh_runner.go:195] Run: which crictl
	I0404 22:59:10.515081   64902 logs.go:123] Gathering logs for kube-apiserver [31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef] ...
	I0404 22:59:10.515102   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31cb759c8e7bc9b49f60d5041c3c93beec514f15ac7618a64af2061eb22635ef"
	I0404 22:59:10.571927   64902 logs.go:123] Gathering logs for etcd [ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c] ...
	I0404 22:59:10.571965   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd813ae02e84f5d975edb17b76ad2b926ebee7f994381a68c62d8311b71f0c"
	I0404 22:59:10.625355   64902 logs.go:123] Gathering logs for kube-scheduler [46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88] ...
	I0404 22:59:10.625391   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46137dbe2189dfc6c073d4cd67616a05c3859669cbf23a2aa0e8ee59a617ec88"
	I0404 22:59:10.669033   64902 logs.go:123] Gathering logs for kube-controller-manager [58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b] ...
	I0404 22:59:10.669061   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b9430fea2e86042831c43dee7d2eaa33565d02373b2d6a856defc1434acb0b"
	I0404 22:59:10.729035   64902 logs.go:123] Gathering logs for storage-provisioner [634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091] ...
	I0404 22:59:10.729070   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634138d6bde20a6fd1432e553cd644094bc415be9022db30fc4e937d5c663091"
	I0404 22:59:10.778108   64902 logs.go:123] Gathering logs for storage-provisioner [6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a] ...
	I0404 22:59:10.778138   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c047a719f15576de408280062976787c4305eb2ed5c9f7f368dedf09164300a"
	I0404 22:59:10.828328   64902 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:10.828355   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:10.885732   64902 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:10.885784   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:10.905718   64902 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:10.905759   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:11.284079   64902 logs.go:123] Gathering logs for container status ...
	I0404 22:59:11.284115   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:11.328982   64902 logs.go:123] Gathering logs for kube-proxy [27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664] ...
	I0404 22:59:11.329013   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fc077394a7d8b529818359fa1f0b25de2e969c59467ce63590d1fb894a6664"
	I0404 22:59:11.372384   64902 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:11.372415   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:06.524620   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:09.025228   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:09.040219   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:09.040306   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:09.077397   65393 cri.go:89] found id: ""
	I0404 22:59:09.077428   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.077439   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:09.077447   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:09.077530   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:09.115279   65393 cri.go:89] found id: ""
	I0404 22:59:09.115309   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.115319   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:09.115326   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:09.115391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:09.156338   65393 cri.go:89] found id: ""
	I0404 22:59:09.156367   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.156375   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:09.156381   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:09.156444   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:09.199281   65393 cri.go:89] found id: ""
	I0404 22:59:09.199310   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.199319   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:09.199325   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:09.199377   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:09.239845   65393 cri.go:89] found id: ""
	I0404 22:59:09.239870   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.239878   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:09.239883   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:09.239944   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:09.285520   65393 cri.go:89] found id: ""
	I0404 22:59:09.285551   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.285562   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:09.285569   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:09.285635   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:09.322005   65393 cri.go:89] found id: ""
	I0404 22:59:09.322033   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.322043   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:09.322050   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:09.322113   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:09.357356   65393 cri.go:89] found id: ""
	I0404 22:59:09.357384   65393 logs.go:276] 0 containers: []
	W0404 22:59:09.357394   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:09.357404   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:09.357419   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:09.437353   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:09.437389   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:09.480066   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:09.480095   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:09.534394   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:09.534433   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:09.548926   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:09.548951   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:09.623970   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:07.363250   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.860037   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:09.806287   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:12.306720   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:11.521189   64902 logs.go:123] Gathering logs for coredns [712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429] ...
	I0404 22:59:11.521222   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 712b227f7cfb067167b5e2b0e633ad71cec6ce4e4ce85dd669b7193cac2b0429"
	I0404 22:59:14.076828   64902 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:14.076859   64902 system_pods.go:61] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.076866   64902 system_pods.go:61] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.076872   64902 system_pods.go:61] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.076878   64902 system_pods.go:61] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.076882   64902 system_pods.go:61] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.076888   64902 system_pods.go:61] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.076895   64902 system_pods.go:61] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.076901   64902 system_pods.go:61] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.076911   64902 system_pods.go:74] duration metric: took 3.959845225s to wait for pod list to return data ...
	I0404 22:59:14.076920   64902 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:14.080018   64902 default_sa.go:45] found service account: "default"
	I0404 22:59:14.080052   64902 default_sa.go:55] duration metric: took 3.124198ms for default service account to be created ...
	I0404 22:59:14.080063   64902 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:14.085812   64902 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:14.085841   64902 system_pods.go:89] "coredns-76f75df574-9qh9s" [3adbc1cb-cb87-4593-a183-a9a14cb8ad5b] Running
	I0404 22:59:14.085847   64902 system_pods.go:89] "etcd-embed-certs-143118" [ee0e1343-6d07-4a7b-9afd-91bedd259700] Running
	I0404 22:59:14.085851   64902 system_pods.go:89] "kube-apiserver-embed-certs-143118" [bfd5768c-8887-41d5-ab08-616e26e70e82] Running
	I0404 22:59:14.085855   64902 system_pods.go:89] "kube-controller-manager-embed-certs-143118" [a1bf2bcc-c8f6-4169-8f8f-1e1bfc252da4] Running
	I0404 22:59:14.085859   64902 system_pods.go:89] "kube-proxy-psst7" [3c2e8cdd-06fb-454a-97a2-7b0764ed0a9a] Running
	I0404 22:59:14.085863   64902 system_pods.go:89] "kube-scheduler-embed-certs-143118" [74aec9ea-4694-40b0-9c10-e1370c62f59c] Running
	I0404 22:59:14.085871   64902 system_pods.go:89] "metrics-server-57f55c9bc5-xwm4m" [1e43f30f-7be7-4083-8d39-eb482e5127a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:14.085875   64902 system_pods.go:89] "storage-provisioner" [3faa390d-3660-4f7d-a20c-e36ee00f2863] Running
	I0404 22:59:14.085882   64902 system_pods.go:126] duration metric: took 5.81489ms to wait for k8s-apps to be running ...
	I0404 22:59:14.085889   64902 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:14.085933   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:14.106253   64902 system_svc.go:56] duration metric: took 20.352553ms WaitForService to wait for kubelet
	I0404 22:59:14.106295   64902 kubeadm.go:576] duration metric: took 4m23.188703249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:14.106319   64902 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:14.110333   64902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:14.110359   64902 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:14.110373   64902 node_conditions.go:105] duration metric: took 4.048469ms to run NodePressure ...
	I0404 22:59:14.110389   64902 start.go:240] waiting for startup goroutines ...
	I0404 22:59:14.110399   64902 start.go:245] waiting for cluster config update ...
	I0404 22:59:14.110412   64902 start.go:254] writing updated cluster config ...
	I0404 22:59:14.110736   64902 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:14.160959   64902 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 22:59:14.164129   64902 out.go:177] * Done! kubectl is now configured to use "embed-certs-143118" cluster and "default" namespace by default
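	The embed-certs-143118 bring-up above completes once the apiserver healthz probe at https://192.168.61.137:8443/healthz returns 200. A minimal sketch of rerunning the same checks by hand on the node (assumes shell access and that crictl and curl are present; the invocations minikube actually uses are the ones quoted verbatim in the log lines above, and the curl call here is only a rough manual equivalent of its HTTPS health check):

	    # list control-plane containers the same way the log's cri.go steps do
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo crictl ps -a --quiet --name=etcd
	    # rough manual equivalent of the apiserver healthz probe seen above
	    curl -k https://192.168.61.137:8443/healthz
	    # repeat the "describe nodes" gathering step with the on-node kubeconfig
	    sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig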
	I0404 22:59:12.124508   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:12.139786   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:12.139862   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:12.179841   65393 cri.go:89] found id: ""
	I0404 22:59:12.179864   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.179872   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:12.179877   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:12.179934   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:12.217232   65393 cri.go:89] found id: ""
	I0404 22:59:12.217260   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.217270   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:12.217277   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:12.217333   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:12.258875   65393 cri.go:89] found id: ""
	I0404 22:59:12.258905   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.258917   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:12.258927   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:12.258990   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:12.302455   65393 cri.go:89] found id: ""
	I0404 22:59:12.302493   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.302508   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:12.302516   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:12.302581   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:12.345264   65393 cri.go:89] found id: ""
	I0404 22:59:12.345298   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.345310   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:12.345322   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:12.345386   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:12.383782   65393 cri.go:89] found id: ""
	I0404 22:59:12.383805   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.383814   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:12.383820   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:12.383881   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:12.421738   65393 cri.go:89] found id: ""
	I0404 22:59:12.421767   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.421777   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:12.421784   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:12.421844   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:12.460348   65393 cri.go:89] found id: ""
	I0404 22:59:12.460379   65393 logs.go:276] 0 containers: []
	W0404 22:59:12.460391   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:12.460407   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:12.460424   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:12.516043   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:12.516081   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:12.531557   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:12.531585   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:12.603052   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:12.603080   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:12.603098   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:12.689033   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:12.689069   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
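
The cycle above (cri.go listing containers for each control-plane component and finding none) repeats for the rest of this test. A rough stand-alone equivalent of that discovery loop, assuming it is run on the node where `crictl` talks to CRI-O as in this job, looks like:

    // cri_discovery.go - sketch of the container discovery the log repeats:
    // for each control-plane component, ask the CRI runtime (via crictl) for
    // matching containers, mirroring `sudo crictl ps -a --quiet --name=<name>`.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
        }
    }
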
	I0404 22:59:15.253621   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:15.268084   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:15.268162   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:15.306888   65393 cri.go:89] found id: ""
	I0404 22:59:15.306913   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.306922   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:15.306929   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:15.306986   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:15.345169   65393 cri.go:89] found id: ""
	I0404 22:59:15.345203   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.345214   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:15.345221   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:15.345279   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:15.381835   65393 cri.go:89] found id: ""
	I0404 22:59:15.381863   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.381874   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:15.381881   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:15.381941   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:15.418221   65393 cri.go:89] found id: ""
	I0404 22:59:15.418247   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.418254   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:15.418259   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:15.418302   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:15.456658   65393 cri.go:89] found id: ""
	I0404 22:59:15.456684   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.456696   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:15.456703   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:15.456761   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:15.498325   65393 cri.go:89] found id: ""
	I0404 22:59:15.498349   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.498359   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:15.498367   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:15.498443   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:15.538694   65393 cri.go:89] found id: ""
	I0404 22:59:15.538723   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.538731   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:15.538738   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:15.538796   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:15.575615   65393 cri.go:89] found id: ""
	I0404 22:59:15.575642   65393 logs.go:276] 0 containers: []
	W0404 22:59:15.575650   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:15.575660   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:15.575672   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:15.616824   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:15.616851   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:15.670897   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:15.670945   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:15.688394   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:15.688429   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:15.764184   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:15.764207   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:15.764222   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
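
The "Gathering logs for kubelet/dmesg/CRI-O" steps above simply tail the relevant journald units on the node. A simplified sketch of the journalctl part (assumptions: run on the node, 400-line tail as in the log) is:

    // gather_logs.go - simplified version of the "Gathering logs for ..." steps:
    // tail the kubelet and crio units with journalctl (last 400 lines, as above).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, unit := range []string{"kubelet", "crio"} {
            out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
            if err != nil {
                fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
                continue
            }
            fmt.Printf("==> last 400 lines of %s <==\n%s\n", unit, out)
        }
    }
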
	I0404 22:59:11.860993   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:13.861520   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:16.361055   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:14.809060   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:17.308968   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:18.346181   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:18.361390   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:18.361465   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:18.410432   65393 cri.go:89] found id: ""
	I0404 22:59:18.410463   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.410474   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:18.410482   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:18.410547   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:18.449280   65393 cri.go:89] found id: ""
	I0404 22:59:18.449309   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.449317   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:18.449322   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:18.449380   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:18.508387   65393 cri.go:89] found id: ""
	I0404 22:59:18.508411   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.508420   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:18.508425   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:18.508481   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:18.555469   65393 cri.go:89] found id: ""
	I0404 22:59:18.555492   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.555501   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:18.555506   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:18.555551   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:18.592206   65393 cri.go:89] found id: ""
	I0404 22:59:18.592231   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.592239   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:18.592245   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:18.592294   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:18.628850   65393 cri.go:89] found id: ""
	I0404 22:59:18.628890   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.628900   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:18.628908   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:18.628968   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:18.667507   65393 cri.go:89] found id: ""
	I0404 22:59:18.667543   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.667556   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:18.667564   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:18.667630   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:18.706367   65393 cri.go:89] found id: ""
	I0404 22:59:18.706392   65393 logs.go:276] 0 containers: []
	W0404 22:59:18.706410   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:18.706422   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:18.706438   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:18.761069   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:18.761108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:18.777164   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:18.777204   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:18.861741   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:18.861769   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:18.861782   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:18.948064   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:18.948108   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:18.361239   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:20.362324   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:19.805576   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.805654   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:21.497977   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:21.511810   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:21.511901   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:21.548151   65393 cri.go:89] found id: ""
	I0404 22:59:21.548177   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.548188   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:21.548196   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:21.548241   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:21.587395   65393 cri.go:89] found id: ""
	I0404 22:59:21.587420   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.587436   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:21.587441   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:21.587507   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:21.624280   65393 cri.go:89] found id: ""
	I0404 22:59:21.624312   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.624322   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:21.624330   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:21.624391   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:21.664557   65393 cri.go:89] found id: ""
	I0404 22:59:21.664583   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.664593   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:21.664600   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:21.664666   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:21.705570   65393 cri.go:89] found id: ""
	I0404 22:59:21.705601   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.705614   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:21.705622   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:21.705683   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:21.744722   65393 cri.go:89] found id: ""
	I0404 22:59:21.744755   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.744764   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:21.744770   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:21.744831   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:21.784997   65393 cri.go:89] found id: ""
	I0404 22:59:21.785036   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.785047   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:21.785054   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:21.785117   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:21.825404   65393 cri.go:89] found id: ""
	I0404 22:59:21.825428   65393 logs.go:276] 0 containers: []
	W0404 22:59:21.825435   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:21.825443   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:21.825467   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:21.880421   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:21.880470   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:21.898337   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:21.898367   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:21.987201   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:21.987233   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:21.987249   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.070135   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:22.070176   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:24.613694   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:24.627661   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:24.627823   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:24.667552   65393 cri.go:89] found id: ""
	I0404 22:59:24.667580   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.667594   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 22:59:24.667601   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:24.667663   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:24.705866   65393 cri.go:89] found id: ""
	I0404 22:59:24.705888   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.705897   65393 logs.go:278] No container was found matching "etcd"
	I0404 22:59:24.705905   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:24.705975   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:24.746920   65393 cri.go:89] found id: ""
	I0404 22:59:24.746948   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.746959   65393 logs.go:278] No container was found matching "coredns"
	I0404 22:59:24.746967   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:24.747021   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:24.792236   65393 cri.go:89] found id: ""
	I0404 22:59:24.792259   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.792270   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 22:59:24.792281   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:24.792340   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:24.832066   65393 cri.go:89] found id: ""
	I0404 22:59:24.832096   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.832107   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 22:59:24.832133   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:24.832207   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:24.873568   65393 cri.go:89] found id: ""
	I0404 22:59:24.873594   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.873605   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 22:59:24.873612   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:24.873678   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:24.922719   65393 cri.go:89] found id: ""
	I0404 22:59:24.922743   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.922750   65393 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:24.922756   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 22:59:24.922801   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 22:59:24.981180   65393 cri.go:89] found id: ""
	I0404 22:59:24.981229   65393 logs.go:276] 0 containers: []
	W0404 22:59:24.981243   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 22:59:24.981255   65393 logs.go:123] Gathering logs for container status ...
	I0404 22:59:24.981272   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:25.039695   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:25.039735   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:25.097992   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:25.098037   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:25.113941   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:25.113970   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 22:59:25.185615   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 22:59:25.185643   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:25.185659   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:22.362720   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.363009   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:24.305260   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:26.805336   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:27.772867   65393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:27.787478   65393 kubeadm.go:591] duration metric: took 4m3.182360219s to restartPrimaryControlPlane
	W0404 22:59:27.787560   65393 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:27.787594   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 22:59:30.083285   65393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.29566411s)
	I0404 22:59:30.083364   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:30.099547   65393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 22:59:30.110792   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 22:59:30.123094   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 22:59:30.123110   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 22:59:30.123152   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 22:59:30.133535   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 22:59:30.133596   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 22:59:30.144194   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 22:59:30.154411   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 22:59:30.154476   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 22:59:30.164648   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.174227   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 22:59:30.174292   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 22:59:30.184396   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 22:59:30.194311   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 22:59:30.194370   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
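
The sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes it when the grep fails, so that the following `kubeadm init` regenerates it. A compact sketch of that check/cleanup pattern (not minikube's code, same commands as logged):

    // stale_config_cleanup.go - sketch of the config check/cleanup sequence above:
    // keep a kubeconfig only if it already references the expected control-plane
    // URL, otherwise remove it so `kubeadm init` can write a fresh one.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the pattern (or the file itself) is missing.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s does not reference %s - removing\n", f, endpoint)
                if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
                    fmt.Fprintln(os.Stderr, "rm failed:", err)
                }
            }
        }
    }
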
	I0404 22:59:30.204463   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 22:59:30.284881   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 22:59:30.285065   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 22:59:30.439256   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 22:59:30.439379   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 22:59:30.439558   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 22:59:30.640320   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 22:59:30.642667   65393 out.go:204]   - Generating certificates and keys ...
	I0404 22:59:30.642787   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 22:59:30.642883   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 22:59:30.643858   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 22:59:30.643964   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 22:59:30.644068   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 22:59:30.644183   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 22:59:30.644914   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 22:59:30.646151   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 22:59:30.646413   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 22:59:30.646975   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 22:59:30.647036   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 22:59:30.647163   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 22:59:30.818578   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 22:59:31.002928   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 22:59:31.300200   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 22:59:26.861545   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:29.361333   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.508251   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 22:59:31.525515   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 22:59:31.527679   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 22:59:31.527773   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 22:59:31.680829   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 22:59:28.806960   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.305235   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:33.305735   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:31.682802   65393 out.go:204]   - Booting up control plane ...
	I0404 22:59:31.682939   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 22:59:31.684100   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 22:59:31.685552   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 22:59:31.686931   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 22:59:31.689241   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
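
At this point kubeadm has written the static Pod manifests it mentions and is waiting for the kubelet to boot them. One quick way to verify that phase completed is to check for the manifest files named in the FileAvailable preflight checks of the `kubeadm init` command above; a minimal sketch, assuming the standard /etc/kubernetes/manifests layout:

    // manifests_check.go - sketch: after kubeadm's "[control-plane]" and "[etcd]"
    // phases, these static Pod manifests should exist under /etc/kubernetes/manifests.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/manifests"
        names := []string{"kube-apiserver.yaml", "kube-controller-manager.yaml", "kube-scheduler.yaml", "etcd.yaml"}
        for _, name := range names {
            p := filepath.Join(dir, name)
            if _, err := os.Stat(p); err != nil {
                fmt.Printf("missing: %s (%v)\n", p, err)
                continue
            }
            fmt.Printf("found:   %s\n", p)
        }
    }
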
	I0404 22:59:31.863885   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:34.361582   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:36.362733   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:35.810851   65047 pod_ready.go:102] pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:37.305033   65047 pod_ready.go:81] duration metric: took 4m0.006524977s for pod "metrics-server-569cc877fc-5q4ff" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:37.305064   65047 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0404 22:59:37.305072   65047 pod_ready.go:38] duration metric: took 4m5.047389638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
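
The pod_ready wait that just timed out polls the Ready condition of the metrics-server pod until a 4m0s deadline passes. A rough stand-alone equivalent, using the pod name from the log and plain kubectl (assumption: kubectl is already configured to talk to this cluster), is:

    // pod_ready_poll.go - rough equivalent of the pod_ready wait that timed out above:
    // poll a pod's Ready condition via kubectl until it is "True" or the deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const (
            namespace = "kube-system"
            pod       = "metrics-server-569cc877fc-5q4ff" // name taken from the log above
            timeout   = 4 * time.Minute
        )
        jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
                "-o", "jsonpath="+jsonpath).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                fmt.Println("pod is Ready")
                return
            }
            fmt.Printf("pod %q has status \"Ready\":\"False\"\n", pod)
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }
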
	I0404 22:59:37.305095   65047 api_server.go:52] waiting for apiserver process to appear ...
	I0404 22:59:37.305121   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:37.305167   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:37.365002   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:37.365022   65047 cri.go:89] found id: ""
	I0404 22:59:37.365029   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:37.365079   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.370431   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:37.370490   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:37.411461   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:37.411489   65047 cri.go:89] found id: ""
	I0404 22:59:37.411498   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:37.411546   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.416214   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:37.416280   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:37.467470   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:37.467500   65047 cri.go:89] found id: ""
	I0404 22:59:37.467510   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:37.467565   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.472332   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:37.472394   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:37.511792   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:37.511815   65047 cri.go:89] found id: ""
	I0404 22:59:37.511821   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:37.511870   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.516458   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:37.516514   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:37.556843   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:37.556869   65047 cri.go:89] found id: ""
	I0404 22:59:37.556880   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:37.556941   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.561556   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:37.561617   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:37.601741   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.601764   65047 cri.go:89] found id: ""
	I0404 22:59:37.601775   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:37.601831   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.606376   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:37.606449   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:37.647116   65047 cri.go:89] found id: ""
	I0404 22:59:37.647139   65047 logs.go:276] 0 containers: []
	W0404 22:59:37.647146   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:37.647151   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:37.647211   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:37.694580   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.694603   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:37.694607   65047 cri.go:89] found id: ""
	I0404 22:59:37.694614   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:37.694662   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.699109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:37.703776   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:37.703802   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:37.758969   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:37.759001   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:37.808316   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:37.808339   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:37.873353   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:37.873388   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:37.891256   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:37.891290   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:38.047292   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:38.047323   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:38.104845   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:38.104881   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:38.180173   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:38.180209   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:38.225152   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:38.225185   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:38.275621   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:38.275647   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:38.861119   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:40.862057   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:38.791198   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:38.791239   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0404 22:59:38.838995   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:38.839032   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:38.889944   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:38.889980   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.437490   65047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:59:41.454599   65047 api_server.go:72] duration metric: took 4m14.969220816s to wait for apiserver process to appear ...
	I0404 22:59:41.454641   65047 api_server.go:88] waiting for apiserver healthz status ...
	I0404 22:59:41.454676   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:41.454729   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:41.496719   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.496740   65047 cri.go:89] found id: ""
	I0404 22:59:41.496747   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:41.496790   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.501869   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:41.501949   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:41.548019   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:41.548038   65047 cri.go:89] found id: ""
	I0404 22:59:41.548047   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:41.548105   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.552683   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:41.552743   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:41.594018   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.594046   65047 cri.go:89] found id: ""
	I0404 22:59:41.594054   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:41.594109   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.598612   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:41.598670   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:41.640618   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:41.640644   65047 cri.go:89] found id: ""
	I0404 22:59:41.640654   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:41.640711   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.645201   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:41.645271   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:41.691378   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.691401   65047 cri.go:89] found id: ""
	I0404 22:59:41.691410   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:41.691465   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.696359   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:41.696421   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:41.738169   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:41.738199   65047 cri.go:89] found id: ""
	I0404 22:59:41.738208   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:41.738261   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.742769   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:41.742844   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:41.782136   65047 cri.go:89] found id: ""
	I0404 22:59:41.782163   65047 logs.go:276] 0 containers: []
	W0404 22:59:41.782175   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:41.782181   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:41.782244   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:41.825698   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:41.825717   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:41.825721   65047 cri.go:89] found id: ""
	I0404 22:59:41.825728   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:41.825773   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.834332   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:41.840251   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:41.840280   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:41.914817   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:41.914856   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:41.956375   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:41.956401   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:41.999930   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:41.999960   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:42.067118   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:42.067148   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:42.104788   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:42.104818   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:42.542407   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:42.542444   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:42.596923   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:42.596957   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:42.613545   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:42.613571   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:42.732728   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:42.732756   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:42.801975   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:42.802010   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:42.844728   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:42.844757   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:42.897576   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:42.897602   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
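
The per-container "Gathering logs for ..." steps above resolve each component to a container ID and then run `crictl logs --tail 400 <id>`. A condensed sketch of that pattern (again assuming it runs on the node with crictl available):

    // container_logs.go - sketch of the per-container log gathering above:
    // list containers by name via crictl, then tail each one's logs (400 lines).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"}
        for _, name := range names {
            out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            for _, id := range strings.Fields(string(out)) {
                logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
                if err != nil {
                    fmt.Printf("crictl logs %s failed: %v\n", id, err)
                    continue
                }
                fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
            }
        }
    }
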
	I0404 22:59:42.863167   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.360824   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:45.448107   65047 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0404 22:59:45.453565   65047 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0404 22:59:45.454921   65047 api_server.go:141] control plane version: v1.30.0-rc.0
	I0404 22:59:45.454945   65047 api_server.go:131] duration metric: took 4.000295856s to wait for apiserver health ...
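
The healthz probe above hits the apiserver endpoint directly and expects a 200 "ok" before the version check. A throwaway probe of the same endpoint (address taken from the log) can be written as below; unlike minikube's own check it does not load the cluster CA, so certificate verification is skipped, and it relies on /healthz being readable without authentication, which is the Kubernetes default:

    // healthz_probe.go - sketch of the apiserver healthz probe logged above.
    // InsecureSkipVerify is used only because this ad-hoc probe does not load the
    // cluster CA; minikube's own check uses the cluster's certificates.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.50.77:8443/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("https://192.168.50.77:8443/healthz returned %d: %s\n", resp.StatusCode, body)
    }
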
	I0404 22:59:45.454955   65047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 22:59:45.454985   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 22:59:45.455041   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 22:59:45.499810   65047 cri.go:89] found id: "ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:45.499841   65047 cri.go:89] found id: ""
	I0404 22:59:45.499849   65047 logs.go:276] 1 containers: [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38]
	I0404 22:59:45.499903   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.506696   65047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 22:59:45.506797   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 22:59:45.549063   65047 cri.go:89] found id: "edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:45.549087   65047 cri.go:89] found id: ""
	I0404 22:59:45.549094   65047 logs.go:276] 1 containers: [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513]
	I0404 22:59:45.549135   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.553601   65047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 22:59:45.553663   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 22:59:45.605961   65047 cri.go:89] found id: "b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:45.605989   65047 cri.go:89] found id: ""
	I0404 22:59:45.605999   65047 logs.go:276] 1 containers: [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff]
	I0404 22:59:45.606057   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.610424   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 22:59:45.610496   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 22:59:45.651488   65047 cri.go:89] found id: "d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:45.651520   65047 cri.go:89] found id: ""
	I0404 22:59:45.651530   65047 logs.go:276] 1 containers: [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915]
	I0404 22:59:45.651589   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.655935   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 22:59:45.656005   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 22:59:45.694207   65047 cri.go:89] found id: "fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:45.694225   65047 cri.go:89] found id: ""
	I0404 22:59:45.694235   65047 logs.go:276] 1 containers: [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451]
	I0404 22:59:45.694290   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.698466   65047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 22:59:45.698520   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 22:59:45.736294   65047 cri.go:89] found id: "06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:45.736318   65047 cri.go:89] found id: ""
	I0404 22:59:45.736326   65047 logs.go:276] 1 containers: [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904]
	I0404 22:59:45.736389   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.741126   65047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 22:59:45.741200   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 22:59:45.783225   65047 cri.go:89] found id: ""
	I0404 22:59:45.783254   65047 logs.go:276] 0 containers: []
	W0404 22:59:45.783265   65047 logs.go:278] No container was found matching "kindnet"
	I0404 22:59:45.783271   65047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0404 22:59:45.783332   65047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0404 22:59:45.823086   65047 cri.go:89] found id: "11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:45.823110   65047 cri.go:89] found id: "608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:45.823116   65047 cri.go:89] found id: ""
	I0404 22:59:45.823126   65047 logs.go:276] 2 containers: [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889]
	I0404 22:59:45.823182   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.827497   65047 ssh_runner.go:195] Run: which crictl
	I0404 22:59:45.832389   65047 logs.go:123] Gathering logs for kubelet ...
	I0404 22:59:45.832416   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 22:59:45.892925   65047 logs.go:123] Gathering logs for describe nodes ...
	I0404 22:59:45.892967   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0404 22:59:46.018980   65047 logs.go:123] Gathering logs for kube-apiserver [ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38] ...
	I0404 22:59:46.019011   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecfe112abbd47ff172b717e3fb825f6b6d97ab6d636829c1e462aadd1fcffa38"
	I0404 22:59:46.083405   65047 logs.go:123] Gathering logs for etcd [edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513] ...
	I0404 22:59:46.083438   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edeb6b8feb7b16e5cdc944224f71f55dba87bbb37ce92b244f6eb90c7cba3513"
	I0404 22:59:46.144135   65047 logs.go:123] Gathering logs for coredns [b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff] ...
	I0404 22:59:46.144169   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b193f00fa4600063a4458cb4766374dd0d6b17f5977f8ad877eafbc63834fcff"
	I0404 22:59:46.190770   65047 logs.go:123] Gathering logs for kube-controller-manager [06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904] ...
	I0404 22:59:46.190803   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06183daed52cd81565f60e9c744edca6c9640e0b0f51d94ffbf4300d41eb0904"
	I0404 22:59:46.257768   65047 logs.go:123] Gathering logs for dmesg ...
	I0404 22:59:46.257801   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 22:59:46.275102   65047 logs.go:123] Gathering logs for kube-scheduler [d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915] ...
	I0404 22:59:46.275127   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3b7424b0efb3a01489a80f0544472c536d19616ac5e2d01f5cc609929210915"
	I0404 22:59:46.312291   65047 logs.go:123] Gathering logs for kube-proxy [fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451] ...
	I0404 22:59:46.312318   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb4517a71e257bd99a2d305ed9cc7832bb4f0f7b4d2c18f601d30f18ef0f7451"
	I0404 22:59:46.351086   65047 logs.go:123] Gathering logs for storage-provisioner [11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee] ...
	I0404 22:59:46.351112   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c58a1830991f9cd71ffea67d47e662a619f7c08f89a6e3e1b7a24bbdec19ee"
	I0404 22:59:46.393478   65047 logs.go:123] Gathering logs for storage-provisioner [608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889] ...
	I0404 22:59:46.393506   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 608d21b5e121f1afe87f9b379b2c3b4f37de4ec2ad93f27c7a1c305064911889"
	I0404 22:59:46.438745   65047 logs.go:123] Gathering logs for CRI-O ...
	I0404 22:59:46.438778   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 22:59:46.822672   65047 logs.go:123] Gathering logs for container status ...
	I0404 22:59:46.822706   65047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
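The log-gathering pass above (kubelet, describe nodes, per-component container logs, dmesg, CRI-O, container status) is driven entirely by shell commands run over SSH, so it can be reproduced by hand on the node. A sketch using the same commands; <container-id> is a placeholder for any ID reported by crictl:

    # list every CRI container, then tail the last 400 lines of one of them
    sudo crictl ps -a
    sudo crictl logs --tail 400 <container-id>
    # the systemd units minikube inspects
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400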
	I0404 22:59:49.386090   65047 system_pods.go:59] 8 kube-system pods found
	I0404 22:59:49.386122   65047 system_pods.go:61] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.386127   65047 system_pods.go:61] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.386133   65047 system_pods.go:61] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.386137   65047 system_pods.go:61] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.386140   65047 system_pods.go:61] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.386143   65047 system_pods.go:61] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.386150   65047 system_pods.go:61] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.386156   65047 system_pods.go:61] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.386165   65047 system_pods.go:74] duration metric: took 3.931202298s to wait for pod list to return data ...
	I0404 22:59:49.386176   65047 default_sa.go:34] waiting for default service account to be created ...
	I0404 22:59:49.389035   65047 default_sa.go:45] found service account: "default"
	I0404 22:59:49.389067   65047 default_sa.go:55] duration metric: took 2.877732ms for default service account to be created ...
	I0404 22:59:49.389079   65047 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 22:59:49.394829   65047 system_pods.go:86] 8 kube-system pods found
	I0404 22:59:49.394859   65047 system_pods.go:89] "coredns-7db6d8ff4d-wr424" [3ede65fe-7ab4-443f-8cae-a6ea4cd27985] Running
	I0404 22:59:49.394867   65047 system_pods.go:89] "etcd-no-preload-024416" [77b9de8d-c262-474c-a30a-8e60d295186b] Running
	I0404 22:59:49.394875   65047 system_pods.go:89] "kube-apiserver-no-preload-024416" [d1894a66-b14b-479a-a741-7756f44a54b8] Running
	I0404 22:59:49.394882   65047 system_pods.go:89] "kube-controller-manager-no-preload-024416" [6d54b8f5-f3d1-4648-8aaa-48e81c7b750d] Running
	I0404 22:59:49.394888   65047 system_pods.go:89] "kube-proxy-zmx89" [2d643ba1-44fb-4783-8d5b-df8a4c0f29fa] Running
	I0404 22:59:49.394896   65047 system_pods.go:89] "kube-scheduler-no-preload-024416" [7b0e9681-f43f-4fbe-b6e3-117c9adb8deb] Running
	I0404 22:59:49.394904   65047 system_pods.go:89] "metrics-server-569cc877fc-5q4ff" [206d3fa3-2f7f-4852-860b-d9f00c868894] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 22:59:49.394912   65047 system_pods.go:89] "storage-provisioner" [b0555d8c-489e-4265-9930-c8f4424cd77b] Running
	I0404 22:59:49.394922   65047 system_pods.go:126] duration metric: took 5.837975ms to wait for k8s-apps to be running ...
	I0404 22:59:49.394931   65047 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 22:59:49.394980   65047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:59:49.414076   65047 system_svc.go:56] duration metric: took 19.132995ms WaitForService to wait for kubelet
	I0404 22:59:49.414111   65047 kubeadm.go:576] duration metric: took 4m22.928735837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 22:59:49.414133   65047 node_conditions.go:102] verifying NodePressure condition ...
	I0404 22:59:49.417160   65047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 22:59:49.417189   65047 node_conditions.go:123] node cpu capacity is 2
	I0404 22:59:49.417204   65047 node_conditions.go:105] duration metric: took 3.065597ms to run NodePressure ...
	I0404 22:59:49.417218   65047 start.go:240] waiting for startup goroutines ...
	I0404 22:59:49.417228   65047 start.go:245] waiting for cluster config update ...
	I0404 22:59:49.417240   65047 start.go:254] writing updated cluster config ...
	I0404 22:59:49.417615   65047 ssh_runner.go:195] Run: rm -f paused
	I0404 22:59:49.470214   65047 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0404 22:59:49.472584   65047 out.go:177] * Done! kubectl is now configured to use "no-preload-024416" cluster and "default" namespace by default
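With the profile reported as Done, kubectl is pointed at the no-preload-024416 context. A quick, illustrative way to confirm what the pod list above shows (everything Running except the pending metrics-server) would be:

    kubectl --context no-preload-024416 get nodes
    kubectl --context no-preload-024416 -n kube-system get pods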
	I0404 22:59:47.361662   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:49.861684   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:52.360447   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:54.361574   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:56.361652   64791 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace has status "Ready":"False"
	I0404 22:59:57.854723   64791 pod_ready.go:81] duration metric: took 4m0.000936307s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" ...
	E0404 22:59:57.854770   64791 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-zbl54" in "kube-system" namespace to be "Ready" (will not retry!)
	I0404 22:59:57.854788   64791 pod_ready.go:38] duration metric: took 4m7.483622498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 22:59:57.854822   64791 kubeadm.go:591] duration metric: took 4m16.210645162s to restartPrimaryControlPlane
	W0404 22:59:57.854889   64791 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0404 22:59:57.854920   64791 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:00:11.689226   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:00:11.689589   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:11.689862   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:16.690194   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:16.690413   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
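The kubelet-check messages from process 65393 are kubeadm repeatedly probing the kubelet's local healthz endpoint and finding nothing listening on port 10248. When debugging this by hand on the affected node, the same probe plus the unit state give the quickest signal; for example:

    curl -sSL http://localhost:10248/healthz
    sudo systemctl status kubelet
    sudo journalctl -u kubelet -n 100 --no-pager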
	I0404 23:00:30.113988   64791 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.259044217s)
	I0404 23:00:30.114079   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:30.130372   64791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0404 23:00:30.141114   64791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:00:30.151572   64791 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:00:30.151606   64791 kubeadm.go:156] found existing configuration files:
	
	I0404 23:00:30.151649   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0404 23:00:30.162006   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:00:30.162058   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:00:30.172386   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0404 23:00:30.182463   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:00:30.182526   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:00:30.192462   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.202932   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:00:30.203003   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:00:30.212623   64791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0404 23:00:30.222016   64791 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:00:30.222079   64791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
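The block above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected endpoint https://control-plane.minikube.internal:8444 and removed when the grep fails, so the subsequent kubeadm init can regenerate it. Sketched as a single loop over the same four files (the logic as shown in the log, not minikube's literal code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done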
	I0404 23:00:30.231912   64791 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:00:30.292277   64791 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0404 23:00:30.292356   64791 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:00:30.453305   64791 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:00:30.453442   64791 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:00:30.453626   64791 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:00:30.680949   64791 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:00:26.690539   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:26.690756   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:30.683066   64791 out.go:204]   - Generating certificates and keys ...
	I0404 23:00:30.683154   64791 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:00:30.683236   64791 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:00:30.683345   64791 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:00:30.683428   64791 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:00:30.683546   64791 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:00:30.683635   64791 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:00:30.683733   64791 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:00:30.683815   64791 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:00:30.684097   64791 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:00:30.684545   64791 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:00:30.684950   64791 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:00:30.685055   64791 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:00:30.969937   64791 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:00:31.196768   64791 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0404 23:00:31.562187   64791 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:00:32.005580   64791 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:00:32.066695   64791 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:00:32.067434   64791 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:00:32.070078   64791 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:00:32.072059   64791 out.go:204]   - Booting up control plane ...
	I0404 23:00:32.072188   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:00:32.072299   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:00:32.072803   64791 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:00:32.094384   64791 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:00:32.095424   64791 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:00:32.095565   64791 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:00:32.235639   64791 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:00:37.740974   64791 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.503079 seconds
	I0404 23:00:37.757121   64791 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0404 23:00:37.771037   64791 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0404 23:00:38.311403   64791 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0404 23:00:38.311608   64791 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-952083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0404 23:00:38.826726   64791 kubeadm.go:309] [bootstrap-token] Using token: fa5m5r.x1an64r8vrgp89m4
	I0404 23:00:38.828302   64791 out.go:204]   - Configuring RBAC rules ...
	I0404 23:00:38.828445   64791 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0404 23:00:38.835805   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0404 23:00:38.846727   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0404 23:00:38.850494   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0404 23:00:38.854653   64791 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0404 23:00:38.859330   64791 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0404 23:00:38.882227   64791 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0404 23:00:39.129283   64791 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0404 23:00:39.263855   64791 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0404 23:00:39.265385   64791 kubeadm.go:309] 
	I0404 23:00:39.265534   64791 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0404 23:00:39.265556   64791 kubeadm.go:309] 
	I0404 23:00:39.265675   64791 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0404 23:00:39.265697   64791 kubeadm.go:309] 
	I0404 23:00:39.265728   64791 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0404 23:00:39.265816   64791 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0404 23:00:39.265873   64791 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0404 23:00:39.265883   64791 kubeadm.go:309] 
	I0404 23:00:39.265948   64791 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0404 23:00:39.265957   64791 kubeadm.go:309] 
	I0404 23:00:39.266009   64791 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0404 23:00:39.266018   64791 kubeadm.go:309] 
	I0404 23:00:39.266072   64791 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0404 23:00:39.266189   64791 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0404 23:00:39.266303   64791 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0404 23:00:39.266313   64791 kubeadm.go:309] 
	I0404 23:00:39.266445   64791 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0404 23:00:39.266542   64791 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0404 23:00:39.266555   64791 kubeadm.go:309] 
	I0404 23:00:39.266661   64791 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.266828   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c \
	I0404 23:00:39.266878   64791 kubeadm.go:309] 	--control-plane 
	I0404 23:00:39.266887   64791 kubeadm.go:309] 
	I0404 23:00:39.267081   64791 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0404 23:00:39.267099   64791 kubeadm.go:309] 
	I0404 23:00:39.267206   64791 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token fa5m5r.x1an64r8vrgp89m4 \
	I0404 23:00:39.267353   64791 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d1b4c34fcca07ffbf2af1753492139507cfa485eea4d9a2d34b5773e7fe8fc5c 
	I0404 23:00:39.278406   64791 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:00:39.278440   64791 cni.go:84] Creating CNI manager for ""
	I0404 23:00:39.278450   64791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 23:00:39.280130   64791 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0404 23:00:39.281491   64791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0404 23:00:39.300708   64791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0404 23:00:39.411609   64791 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0404 23:00:39.411700   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:39.411741   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-952083 minikube.k8s.io/updated_at=2024_04_04T23_00_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=63fc7bb20d0e21b4e249dbe479530458cf75526a minikube.k8s.io/name=default-k8s-diff-port-952083 minikube.k8s.io/primary=true
	I0404 23:00:39.491213   64791 ops.go:34] apiserver oom_adj: -16
	I0404 23:00:39.639999   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.140239   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:40.640887   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.140139   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:41.641048   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.140111   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:42.640439   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.140701   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:43.640565   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.140884   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:44.640405   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.140665   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:45.640037   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.140251   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.640442   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:46.691433   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:00:46.691736   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:00:47.140207   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:47.640279   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.141046   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:48.640259   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.140525   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:49.640869   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.140946   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:50.640215   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.140706   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:51.640589   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.140926   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.640774   64791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0404 23:00:52.755392   64791 kubeadm.go:1107] duration metric: took 13.343764127s to wait for elevateKubeSystemPrivileges
	W0404 23:00:52.755430   64791 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0404 23:00:52.755438   64791 kubeadm.go:393] duration metric: took 5m11.165500768s to StartCluster
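The repeated "kubectl get sa default" calls above are a poll loop: minikube retries roughly every 500ms until the default service account exists, then records the elapsed time as the elevateKubeSystemPrivileges metric. The equivalent wait, expressed as a shell loop with the same binary and kubeconfig shown in the log, would look roughly like:

    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done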
	I0404 23:00:52.755452   64791 settings.go:142] acquiring lock: {Name:mke696dd9e5448ed18ebb85f4408d7127196ca28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.755542   64791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 23:00:52.757921   64791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/kubeconfig: {Name:mkb693316b40cfc8a4690ffb2a888dd615c310ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 23:00:52.758240   64791 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.148 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0404 23:00:52.760258   64791 out.go:177] * Verifying Kubernetes components...
	I0404 23:00:52.758360   64791 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0404 23:00:52.758468   64791 config.go:182] Loaded profile config "default-k8s-diff-port-952083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 23:00:52.761972   64791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0404 23:00:52.761988   64791 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.761992   64791 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762023   64791 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-952083"
	I0404 23:00:52.762033   64791 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762045   64791 addons.go:243] addon metrics-server should already be in state true
	I0404 23:00:52.761969   64791 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-952083"
	I0404 23:00:52.762082   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762089   64791 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.762098   64791 addons.go:243] addon storage-provisioner should already be in state true
	I0404 23:00:52.762120   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.762369   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762402   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762483   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.762458   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.762595   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.780081   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33543
	I0404 23:00:52.780630   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.782784   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0404 23:00:52.782816   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0404 23:00:52.783239   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783487   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.783765   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783788   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.783934   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.783952   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784138   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784378   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.784386   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.784518   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.784536   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.784883   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.785321   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785347   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.785391   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.785423   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.788871   64791 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-952083"
	W0404 23:00:52.788894   64791 addons.go:243] addon default-storageclass should already be in state true
	I0404 23:00:52.788931   64791 host.go:66] Checking if "default-k8s-diff-port-952083" exists ...
	I0404 23:00:52.789296   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.789333   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.804398   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33107
	I0404 23:00:52.804904   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.805654   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.805676   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.806123   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.806391   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.806825   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32861
	I0404 23:00:52.807228   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.807701   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.807729   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.808198   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.808222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.808414   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.810322   64791 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0404 23:00:52.809224   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0404 23:00:52.809886   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.811775   64791 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:52.811788   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0404 23:00:52.811806   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.813354   64791 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0404 23:00:52.812191   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.814489   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.814754   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0404 23:00:52.814765   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0404 23:00:52.814784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.815034   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.815192   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.815469   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.815641   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.815811   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.815997   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.816011   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.816227   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.816944   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.817981   64791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 23:00:52.818023   64791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 23:00:52.818260   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818295   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.818316   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.818487   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.818752   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.819016   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.819223   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.835713   64791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0404 23:00:52.836456   64791 main.go:141] libmachine: () Calling .GetVersion
	I0404 23:00:52.837140   64791 main.go:141] libmachine: Using API Version  1
	I0404 23:00:52.837168   64791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 23:00:52.837987   64791 main.go:141] libmachine: () Calling .GetMachineName
	I0404 23:00:52.838475   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetState
	I0404 23:00:52.840280   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .DriverName
	I0404 23:00:52.840539   64791 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:52.840558   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0404 23:00:52.840576   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHHostname
	I0404 23:00:52.843852   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844222   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:af", ip: ""} in network mk-default-k8s-diff-port-952083: {Iface:virbr4 ExpiryTime:2024-04-04 23:55:28 +0000 UTC Type:0 Mac:52:54:00:5c:a7:af Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:default-k8s-diff-port-952083 Clientid:01:52:54:00:5c:a7:af}
	I0404 23:00:52.844244   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | domain default-k8s-diff-port-952083 has defined IP address 192.168.72.148 and MAC address 52:54:00:5c:a7:af in network mk-default-k8s-diff-port-952083
	I0404 23:00:52.844418   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHPort
	I0404 23:00:52.844593   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHKeyPath
	I0404 23:00:52.844757   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .GetSSHUsername
	I0404 23:00:52.844899   64791 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/default-k8s-diff-port-952083/id_rsa Username:docker}
	I0404 23:00:52.960152   64791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0404 23:00:52.982987   64791 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992420   64791 node_ready.go:49] node "default-k8s-diff-port-952083" has status "Ready":"True"
	I0404 23:00:52.992460   64791 node_ready.go:38] duration metric: took 9.431627ms for node "default-k8s-diff-port-952083" to be "Ready" ...
	I0404 23:00:52.992472   64791 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 23:00:52.998485   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:53.098746   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0404 23:00:53.098775   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0404 23:00:53.122387   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0404 23:00:53.122418   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0404 23:00:53.123566   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0404 23:00:53.143424   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0404 23:00:53.165026   64791 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 23:00:53.165052   64791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0404 23:00:53.225963   64791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0404 23:00:53.720949   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720969   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.720980   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.720988   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721380   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721399   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721407   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721413   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721415   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721423   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.721433   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721425   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.721707   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.721763   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721768   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.721774   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721781   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:53.721784   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:53.754309   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:53.754337   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:53.754642   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:53.754661   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.174695   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.174726   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175070   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175111   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175133   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175146   64791 main.go:141] libmachine: Making call to close driver server
	I0404 23:00:54.175154   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) Calling .Close
	I0404 23:00:54.175397   64791 main.go:141] libmachine: Successfully made call to close driver server
	I0404 23:00:54.175443   64791 main.go:141] libmachine: Making call to close connection to plugin binary
	I0404 23:00:54.175446   64791 main.go:141] libmachine: (default-k8s-diff-port-952083) DBG | Closing plugin on server side
	I0404 23:00:54.175454   64791 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-952083"
	I0404 23:00:54.177650   64791 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0404 23:00:54.179204   64791 addons.go:505] duration metric: took 1.420843749s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
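The three addons are applied as plain manifests with kubectl, as the commands above show. To inspect the metrics-server addon that later parts of this test wait on, something like the following would do (the deployment name metrics-server is assumed from the pod name reported below):

    kubectl --context default-k8s-diff-port-952083 -n kube-system get deployment metrics-server
    kubectl --context default-k8s-diff-port-952083 -n kube-system get pods | grep metrics-server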
	I0404 23:00:55.012235   64791 pod_ready.go:102] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"False"
	I0404 23:00:56.006950   64791 pod_ready.go:92] pod "coredns-76f75df574-t2l7m" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.006974   64791 pod_ready.go:81] duration metric: took 3.00846182s for pod "coredns-76f75df574-t2l7m" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.006983   64791 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.012991   64791 pod_ready.go:92] pod "coredns-76f75df574-vnzlh" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.013015   64791 pod_ready.go:81] duration metric: took 6.025362ms for pod "coredns-76f75df574-vnzlh" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.013028   64791 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.017976   64791 pod_ready.go:92] pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.017999   64791 pod_ready.go:81] duration metric: took 4.963826ms for pod "etcd-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.018008   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022797   64791 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.022816   64791 pod_ready.go:81] duration metric: took 4.802173ms for pod "kube-apiserver-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.022825   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.027987   64791 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.028011   64791 pod_ready.go:81] duration metric: took 5.178244ms for pod "kube-controller-manager-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.028024   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402956   64791 pod_ready.go:92] pod "kube-proxy-lbw9b" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.402978   64791 pod_ready.go:81] duration metric: took 374.945741ms for pod "kube-proxy-lbw9b" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.402998   64791 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806688   64791 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace has status "Ready":"True"
	I0404 23:00:56.806723   64791 pod_ready.go:81] duration metric: took 403.715948ms for pod "kube-scheduler-default-k8s-diff-port-952083" in "kube-system" namespace to be "Ready" ...
	I0404 23:00:56.806734   64791 pod_ready.go:38] duration metric: took 3.814250651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0404 23:00:56.806748   64791 api_server.go:52] waiting for apiserver process to appear ...
	I0404 23:00:56.806804   64791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 23:00:56.823558   64791 api_server.go:72] duration metric: took 4.065275824s to wait for apiserver process to appear ...
	I0404 23:00:56.823582   64791 api_server.go:88] waiting for apiserver healthz status ...
	I0404 23:00:56.823602   64791 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8444/healthz ...
	I0404 23:00:56.828204   64791 api_server.go:279] https://192.168.72.148:8444/healthz returned 200:
	ok
	I0404 23:00:56.829616   64791 api_server.go:141] control plane version: v1.29.3
	I0404 23:00:56.829647   64791 api_server.go:131] duration metric: took 6.059596ms to wait for apiserver health ...
	I0404 23:00:56.829654   64791 system_pods.go:43] waiting for kube-system pods to appear ...
	I0404 23:00:57.009301   64791 system_pods.go:59] 9 kube-system pods found
	I0404 23:00:57.009336   64791 system_pods.go:61] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.009343   64791 system_pods.go:61] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.009349   64791 system_pods.go:61] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.009354   64791 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.009359   64791 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.009364   64791 system_pods.go:61] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.009368   64791 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.009377   64791 system_pods.go:61] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.009382   64791 system_pods.go:61] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.009394   64791 system_pods.go:74] duration metric: took 179.733932ms to wait for pod list to return data ...
	I0404 23:00:57.009404   64791 default_sa.go:34] waiting for default service account to be created ...
	I0404 23:00:57.203053   64791 default_sa.go:45] found service account: "default"
	I0404 23:00:57.203080   64791 default_sa.go:55] duration metric: took 193.668691ms for default service account to be created ...
	I0404 23:00:57.203092   64791 system_pods.go:116] waiting for k8s-apps to be running ...
	I0404 23:00:57.407952   64791 system_pods.go:86] 9 kube-system pods found
	I0404 23:00:57.407986   64791 system_pods.go:89] "coredns-76f75df574-t2l7m" [dcc43d3e-d639-462b-81f1-d4abcdcdbe91] Running
	I0404 23:00:57.407992   64791 system_pods.go:89] "coredns-76f75df574-vnzlh" [acab1107-bd9a-4767-bbcd-705faf9e4dea] Running
	I0404 23:00:57.407996   64791 system_pods.go:89] "etcd-default-k8s-diff-port-952083" [21ea38ec-f9b5-41b5-a9c0-120b61308aae] Running
	I0404 23:00:57.408001   64791 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-952083" [68a10abd-7d09-4e1e-a6c8-b09b4b99dc94] Running
	I0404 23:00:57.408005   64791 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-952083" [a7cb0d00-0163-4052-8678-f8b5eaf2cf88] Running
	I0404 23:00:57.408009   64791 system_pods.go:89] "kube-proxy-lbw9b" [6b7f4531-39e8-4c2c-a4b5-984c7c2b2d6a] Running
	I0404 23:00:57.408013   64791 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-952083" [30098a45-967c-4fc0-8813-76b6887d0042] Running
	I0404 23:00:57.408021   64791 system_pods.go:89] "metrics-server-57f55c9bc5-szq42" [23572301-f885-4efd-bbd9-0931b448184f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0404 23:00:57.408025   64791 system_pods.go:89] "storage-provisioner" [0b001dd3-825c-43ed-903d-669afc75f79c] Running
	I0404 23:00:57.408033   64791 system_pods.go:126] duration metric: took 204.93565ms to wait for k8s-apps to be running ...
	I0404 23:00:57.408044   64791 system_svc.go:44] waiting for kubelet service to be running ....
	I0404 23:00:57.408086   64791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:00:57.426022   64791 system_svc.go:56] duration metric: took 17.970809ms WaitForService to wait for kubelet
	I0404 23:00:57.426056   64791 kubeadm.go:576] duration metric: took 4.66777886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0404 23:00:57.426088   64791 node_conditions.go:102] verifying NodePressure condition ...
	I0404 23:00:57.603468   64791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0404 23:00:57.603495   64791 node_conditions.go:123] node cpu capacity is 2
	I0404 23:00:57.603508   64791 node_conditions.go:105] duration metric: took 177.414876ms to run NodePressure ...
	I0404 23:00:57.603522   64791 start.go:240] waiting for startup goroutines ...
	I0404 23:00:57.603532   64791 start.go:245] waiting for cluster config update ...
	I0404 23:00:57.603544   64791 start.go:254] writing updated cluster config ...
	I0404 23:00:57.603820   64791 ssh_runner.go:195] Run: rm -f paused
	I0404 23:00:57.652962   64791 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0404 23:00:57.655346   64791 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-952083" cluster and "default" namespace by default
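	The readiness wait loop above can be re-run by hand once the profile is up; a minimal sketch using only commands that appear in the log, assuming the "default-k8s-diff-port-952083" context/profile name and the apiserver address 192.168.72.148:8444 taken from the lines above:
	
		# list the same kube-system pods the system_pods.go wait loop checked
		kubectl --context default-k8s-diff-port-952083 get pods -n kube-system
		# poll the same healthz endpoint the api_server.go check hit (self-signed cert, so -k)
		curl -k https://192.168.72.148:8444/healthz
		# repeat the kubelet service check from system_svc.go on the node (profile name is an assumption)
		minikube -p default-k8s-diff-port-952083 ssh -- sudo systemctl is-active kubelet
	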
	I0404 23:01:26.692667   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:01:26.693040   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:01:26.693065   65393 kubeadm.go:309] 
	I0404 23:01:26.693121   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:01:26.693176   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:01:26.693188   65393 kubeadm.go:309] 
	I0404 23:01:26.693230   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:01:26.693296   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:01:26.693448   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:01:26.693460   65393 kubeadm.go:309] 
	I0404 23:01:26.693615   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:01:26.693668   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:01:26.693715   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:01:26.693724   65393 kubeadm.go:309] 
	I0404 23:01:26.693859   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:01:26.693972   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:01:26.693981   65393 kubeadm.go:309] 
	I0404 23:01:26.694101   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:01:26.694189   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:01:26.694308   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:01:26.694419   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:01:26.694439   65393 kubeadm.go:309] 
	I0404 23:01:26.695392   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:01:26.695534   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:01:26.695645   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0404 23:01:26.695786   65393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0404 23:01:26.695852   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0404 23:01:27.175340   65393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 23:01:27.191436   65393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0404 23:01:27.202985   65393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0404 23:01:27.203023   65393 kubeadm.go:156] found existing configuration files:
	
	I0404 23:01:27.203095   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0404 23:01:27.214266   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0404 23:01:27.214326   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0404 23:01:27.225823   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0404 23:01:27.236219   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0404 23:01:27.236291   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0404 23:01:27.247062   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.257772   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0404 23:01:27.257838   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0404 23:01:27.270595   65393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0404 23:01:27.283809   65393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0404 23:01:27.283884   65393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0404 23:01:27.294917   65393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0404 23:01:27.370268   65393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0404 23:01:27.370431   65393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0404 23:01:27.531171   65393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0404 23:01:27.531309   65393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0404 23:01:27.531502   65393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0404 23:01:27.741128   65393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0404 23:01:27.743555   65393 out.go:204]   - Generating certificates and keys ...
	I0404 23:01:27.743674   65393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0404 23:01:27.743778   65393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0404 23:01:27.743900   65393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0404 23:01:27.744020   65393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0404 23:01:27.744144   65393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0404 23:01:27.744231   65393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0404 23:01:27.744396   65393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0404 23:01:27.744532   65393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0404 23:01:27.744762   65393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0404 23:01:27.745172   65393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0404 23:01:27.745405   65393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0404 23:01:27.745902   65393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0404 23:01:27.811633   65393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0404 23:01:27.874609   65393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0404 23:01:28.009290   65393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0404 23:01:28.171654   65393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0404 23:01:28.193647   65393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0404 23:01:28.194533   65393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0404 23:01:28.194615   65393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0404 23:01:28.345640   65393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0404 23:01:28.348028   65393 out.go:204]   - Booting up control plane ...
	I0404 23:01:28.348233   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0404 23:01:28.352245   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0404 23:01:28.354730   65393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0404 23:01:28.354860   65393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0404 23:01:28.366484   65393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0404 23:02:08.368069   65393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0404 23:02:08.368190   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:08.368485   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:13.368934   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:13.369212   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:23.369698   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:23.369947   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:02:43.370821   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:02:43.371097   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372612   65393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0404 23:03:23.372925   65393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0404 23:03:23.372952   65393 kubeadm.go:309] 
	I0404 23:03:23.373009   65393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0404 23:03:23.373060   65393 kubeadm.go:309] 		timed out waiting for the condition
	I0404 23:03:23.373083   65393 kubeadm.go:309] 
	I0404 23:03:23.373139   65393 kubeadm.go:309] 	This error is likely caused by:
	I0404 23:03:23.373204   65393 kubeadm.go:309] 		- The kubelet is not running
	I0404 23:03:23.373444   65393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0404 23:03:23.373460   65393 kubeadm.go:309] 
	I0404 23:03:23.373609   65393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0404 23:03:23.373664   65393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0404 23:03:23.373709   65393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0404 23:03:23.373720   65393 kubeadm.go:309] 
	I0404 23:03:23.373881   65393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0404 23:03:23.373993   65393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0404 23:03:23.374004   65393 kubeadm.go:309] 
	I0404 23:03:23.374160   65393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0404 23:03:23.374293   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0404 23:03:23.374425   65393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0404 23:03:23.374542   65393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0404 23:03:23.374565   65393 kubeadm.go:309] 
	I0404 23:03:23.375946   65393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0404 23:03:23.376063   65393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0404 23:03:23.376228   65393 kubeadm.go:393] duration metric: took 7m58.828175379s to StartCluster
	I0404 23:03:23.376287   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0404 23:03:23.376229   65393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0404 23:03:23.376350   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0404 23:03:23.427597   65393 cri.go:89] found id: ""
	I0404 23:03:23.427625   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.427637   65393 logs.go:278] No container was found matching "kube-apiserver"
	I0404 23:03:23.427644   65393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0404 23:03:23.427709   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0404 23:03:23.466127   65393 cri.go:89] found id: ""
	I0404 23:03:23.466158   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.466168   65393 logs.go:278] No container was found matching "etcd"
	I0404 23:03:23.466176   65393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0404 23:03:23.466240   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0404 23:03:23.509244   65393 cri.go:89] found id: ""
	I0404 23:03:23.509287   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.509295   65393 logs.go:278] No container was found matching "coredns"
	I0404 23:03:23.509301   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0404 23:03:23.509347   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0404 23:03:23.553691   65393 cri.go:89] found id: ""
	I0404 23:03:23.553722   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.553732   65393 logs.go:278] No container was found matching "kube-scheduler"
	I0404 23:03:23.553740   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0404 23:03:23.553807   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0404 23:03:23.601941   65393 cri.go:89] found id: ""
	I0404 23:03:23.601965   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.601973   65393 logs.go:278] No container was found matching "kube-proxy"
	I0404 23:03:23.601979   65393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0404 23:03:23.602037   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0404 23:03:23.639767   65393 cri.go:89] found id: ""
	I0404 23:03:23.639802   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.639811   65393 logs.go:278] No container was found matching "kube-controller-manager"
	I0404 23:03:23.639817   65393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0404 23:03:23.639875   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0404 23:03:23.680136   65393 cri.go:89] found id: ""
	I0404 23:03:23.680168   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.680179   65393 logs.go:278] No container was found matching "kindnet"
	I0404 23:03:23.680187   65393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0404 23:03:23.680246   65393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0404 23:03:23.719745   65393 cri.go:89] found id: ""
	I0404 23:03:23.719767   65393 logs.go:276] 0 containers: []
	W0404 23:03:23.719774   65393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0404 23:03:23.719784   65393 logs.go:123] Gathering logs for kubelet ...
	I0404 23:03:23.719797   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0404 23:03:23.776065   65393 logs.go:123] Gathering logs for dmesg ...
	I0404 23:03:23.776105   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0404 23:03:23.791442   65393 logs.go:123] Gathering logs for describe nodes ...
	I0404 23:03:23.791469   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0404 23:03:23.884793   65393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0404 23:03:23.884820   65393 logs.go:123] Gathering logs for CRI-O ...
	I0404 23:03:23.884836   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0404 23:03:24.001882   65393 logs.go:123] Gathering logs for container status ...
	I0404 23:03:24.001926   65393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0404 23:03:24.056020   65393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0404 23:03:24.056075   65393 out.go:239] * 
	W0404 23:03:24.056157   65393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.056191   65393 out.go:239] * 
	W0404 23:03:24.057119   65393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0404 23:03:24.060566   65393 out.go:177] 
	W0404 23:03:24.061904   65393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0404 23:03:24.061981   65393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0404 23:03:24.062009   65393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0404 23:03:24.063812   65393 out.go:177] 
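	For reference, the suggestion above maps to roughly the following commands; this is a sketch only, assuming the "old-k8s-version-343162" profile name seen in the CRI-O log below and that any remaining flags from the original `minikube start` invocation are repeated unchanged:
	
		# node-side checks named in the kubeadm output above
		minikube -p old-k8s-version-343162 ssh -- sudo systemctl status kubelet
		minikube -p old-k8s-version-343162 ssh -- sudo journalctl -xeu kubelet
		minikube -p old-k8s-version-343162 ssh -- "sudo crictl ps -a | grep kube | grep -v pause"
		# retry with the suggested kubelet cgroup-driver override (other start flags omitted here)
		minikube start -p old-k8s-version-343162 --extra-config=kubelet.cgroup-driver=systemd
	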
	
	
	==> CRI-O <==
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.867831991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272467867808245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fedab8e-cf29-47e1-946d-1d69272a47be name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.868712714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59cf5d20-c463-476d-8194-6f64eb2162e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.868783543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59cf5d20-c463-476d-8194-6f64eb2162e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.868863401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=59cf5d20-c463-476d-8194-6f64eb2162e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.907292346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9aa717c5-44c8-410f-9a38-1c16b12ad205 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.907420353Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9aa717c5-44c8-410f-9a38-1c16b12ad205 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.908894967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b29abed-27dc-4c06-b868-bc0370a79996 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.909338205Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272467909314076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b29abed-27dc-4c06-b868-bc0370a79996 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.910267069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7051cd2c-bfc5-4040-89fc-10d3d4a101b9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.910348227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7051cd2c-bfc5-4040-89fc-10d3d4a101b9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.910415929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7051cd2c-bfc5-4040-89fc-10d3d4a101b9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.949441128Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e236c05e-7dfa-4ca3-b818-7270b39e1d31 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.949646692Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e236c05e-7dfa-4ca3-b818-7270b39e1d31 name=/runtime.v1.RuntimeService/Version
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.951083912Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bf74bbb-459c-48b1-a6c7-34331ae98a23 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.951653443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272467951626759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bf74bbb-459c-48b1-a6c7-34331ae98a23 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.952202780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a7fab4e-90c3-48a5-9c04-3aaaf73bd08a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.952289074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a7fab4e-90c3-48a5-9c04-3aaaf73bd08a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.952342176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2a7fab4e-90c3-48a5-9c04-3aaaf73bd08a name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.990884508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50e113fb-a953-4c89-988a-f3c2f0a93b4d name=/runtime.v1.RuntimeService/Version
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.991001961Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50e113fb-a953-4c89-988a-f3c2f0a93b4d name=/runtime.v1.RuntimeService/Version
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.992381173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=923fecdf-501a-4569-8865-3c40dee245a6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.992985900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712272467992953784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=923fecdf-501a-4569-8865-3c40dee245a6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.993732132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa4709a9-a560-4735-89f9-d154fc051075 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.993819840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa4709a9-a560-4735-89f9-d154fc051075 name=/runtime.v1.RuntimeService/ListContainers
	Apr 04 23:14:27 old-k8s-version-343162 crio[651]: time="2024-04-04 23:14:27.993885556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fa4709a9-a560-4735-89f9-d154fc051075 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 4 22:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056193] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041693] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr 4 22:55] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.993320] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.666052] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.724891] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.065551] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096758] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.200608] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.163985] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.312462] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.410618] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.075387] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.725033] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[ +11.687086] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 4 22:59] systemd-fstab-generator[4953]: Ignoring "noauto" option for root device
	[Apr 4 23:01] systemd-fstab-generator[5232]: Ignoring "noauto" option for root device
	[  +0.067974] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:14:28 up 19 min,  0 users,  load average: 0.05, 0.08, 0.03
	Linux old-k8s-version-343162 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000a88ef0)
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c37ef0, 0x4f0ac20, 0xc00036d400, 0x1, 0xc0001000c0)
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000a262a0, 0xc0001000c0)
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b6d070, 0xc0009218a0)
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6680]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 04 23:14:25 old-k8s-version-343162 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 04 23:14:25 old-k8s-version-343162 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 04 23:14:25 old-k8s-version-343162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 135.
	Apr 04 23:14:25 old-k8s-version-343162 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 04 23:14:25 old-k8s-version-343162 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6690]: I0404 23:14:25.969721    6690 server.go:416] Version: v1.20.0
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6690]: I0404 23:14:25.970021    6690 server.go:837] Client rotation is on, will bootstrap in background
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6690]: I0404 23:14:25.972064    6690 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6690]: W0404 23:14:25.973059    6690 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 04 23:14:25 old-k8s-version-343162 kubelet[6690]: I0404 23:14:25.973369    6690 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-343162 -n old-k8s-version-343162
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 2 (252.271731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-343162" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (118.45s)

                                                
                                    

Test pass (257/325)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 45.73
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.29.3/json-events 13.45
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.07
18 TestDownloadOnly/v1.29.3/DeleteAll 0.14
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.30.0-rc.0/json-events 13.22
22 TestDownloadOnly/v1.30.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.0/LogsDuration 0.07
27 TestDownloadOnly/v1.30.0-rc.0/DeleteAll 0.14
28 TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 112.77
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 211.84
38 TestAddons/parallel/Registry 17.19
40 TestAddons/parallel/InspektorGadget 12.18
41 TestAddons/parallel/MetricsServer 6.14
42 TestAddons/parallel/HelmTiller 11.92
44 TestAddons/parallel/CSI 44.68
45 TestAddons/parallel/Headlamp 14.14
46 TestAddons/parallel/CloudSpanner 5.78
47 TestAddons/parallel/LocalPath 56.39
48 TestAddons/parallel/NvidiaDevicePlugin 6.62
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.13
54 TestCertOptions 76.42
55 TestCertExpiration 330.96
57 TestForceSystemdFlag 70.44
58 TestForceSystemdEnv 54.72
60 TestKVMDriverInstallOrUpdate 4.44
64 TestErrorSpam/setup 43.97
65 TestErrorSpam/start 0.4
66 TestErrorSpam/status 0.78
67 TestErrorSpam/pause 1.64
68 TestErrorSpam/unpause 1.73
69 TestErrorSpam/stop 5.02
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 60.68
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 36.51
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.84
81 TestFunctional/serial/CacheCmd/cache/add_local 2.23
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 33.79
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.55
92 TestFunctional/serial/LogsFileCmd 1.53
93 TestFunctional/serial/InvalidService 4.03
95 TestFunctional/parallel/ConfigCmd 0.42
96 TestFunctional/parallel/DashboardCmd 20.36
97 TestFunctional/parallel/DryRun 0.29
98 TestFunctional/parallel/InternationalLanguage 0.15
99 TestFunctional/parallel/StatusCmd 1.06
103 TestFunctional/parallel/ServiceCmdConnect 8.82
104 TestFunctional/parallel/AddonsCmd 0.14
105 TestFunctional/parallel/PersistentVolumeClaim 51.74
107 TestFunctional/parallel/SSHCmd 0.43
108 TestFunctional/parallel/CpCmd 1.38
109 TestFunctional/parallel/MySQL 25.48
110 TestFunctional/parallel/FileSync 0.22
111 TestFunctional/parallel/CertSync 1.38
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
119 TestFunctional/parallel/License 0.61
120 TestFunctional/parallel/Version/short 0.06
121 TestFunctional/parallel/Version/components 0.79
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.6
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
126 TestFunctional/parallel/ImageCommands/ImageBuild 3.96
127 TestFunctional/parallel/ImageCommands/Setup 1.86
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 8.22
141 TestFunctional/parallel/ServiceCmd/DeployApp 23.17
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.67
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.54
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.72
146 TestFunctional/parallel/ServiceCmd/List 0.42
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
149 TestFunctional/parallel/ServiceCmd/Format 0.33
150 TestFunctional/parallel/ServiceCmd/URL 0.38
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
152 TestFunctional/parallel/ProfileCmd/profile_list 0.48
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.56
155 TestFunctional/parallel/MountCmd/any-port 8.97
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.59
157 TestFunctional/parallel/MountCmd/specific-port 1.81
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.18
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 232.35
166 TestMultiControlPlane/serial/DeployApp 7
167 TestMultiControlPlane/serial/PingHostFromPods 1.32
168 TestMultiControlPlane/serial/AddWorkerNode 46.76
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.57
171 TestMultiControlPlane/serial/CopyFile 13.83
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.52
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.42
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.38
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
180 TestMultiControlPlane/serial/RestartCluster 343.94
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
182 TestMultiControlPlane/serial/AddSecondaryNode 75.29
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
187 TestJSONOutput/start/Command 98.21
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.76
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.67
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.42
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.23
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 89.96
219 TestMountStart/serial/StartWithMountFirst 26.97
220 TestMountStart/serial/VerifyMountFirst 0.39
221 TestMountStart/serial/StartWithMountSecond 30.66
222 TestMountStart/serial/VerifyMountSecond 0.39
223 TestMountStart/serial/DeleteFirst 0.89
224 TestMountStart/serial/VerifyMountPostDelete 0.49
225 TestMountStart/serial/Stop 1.51
226 TestMountStart/serial/RestartStopped 24.65
227 TestMountStart/serial/VerifyMountPostStop 0.41
230 TestMultiNode/serial/FreshStart2Nodes 109.5
231 TestMultiNode/serial/DeployApp2Nodes 6.89
232 TestMultiNode/serial/PingHostFrom2Pods 0.89
233 TestMultiNode/serial/AddNode 41.74
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.23
236 TestMultiNode/serial/CopyFile 7.61
237 TestMultiNode/serial/StopNode 3.18
238 TestMultiNode/serial/StartAfterStop 29.77
240 TestMultiNode/serial/DeleteNode 2.42
242 TestMultiNode/serial/RestartMultiNode 180.66
243 TestMultiNode/serial/ValidateNameConflict 47.82
250 TestScheduledStopUnix 116.32
254 TestRunningBinaryUpgrade 144.84
265 TestNetworkPlugins/group/false 3.95
276 TestStoppedBinaryUpgrade/Setup 2.27
277 TestStoppedBinaryUpgrade/Upgrade 126.6
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
280 TestPause/serial/Start 77.08
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
283 TestNoKubernetes/serial/StartWithK8s 68.15
284 TestNetworkPlugins/group/auto/Start 121.35
285 TestPause/serial/SecondStartNoReconfiguration 69.3
286 TestNetworkPlugins/group/kindnet/Start 101.46
287 TestNoKubernetes/serial/StartWithStopK8s 46.87
288 TestPause/serial/Pause 0.88
289 TestPause/serial/VerifyStatus 0.28
290 TestPause/serial/Unpause 0.78
291 TestPause/serial/PauseAgain 2.22
292 TestPause/serial/DeletePaused 1.37
293 TestPause/serial/VerifyDeletedResources 0.53
294 TestNetworkPlugins/group/enable-default-cni/Start 71.35
295 TestNoKubernetes/serial/Start 50.52
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.06
297 TestNetworkPlugins/group/auto/KubeletFlags 0.52
298 TestNetworkPlugins/group/auto/NetCatPod 12.59
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
300 TestNetworkPlugins/group/kindnet/NetCatPod 12.24
301 TestNetworkPlugins/group/auto/DNS 0.18
302 TestNetworkPlugins/group/auto/Localhost 0.15
303 TestNetworkPlugins/group/kindnet/DNS 0.18
304 TestNetworkPlugins/group/auto/HairPin 0.16
305 TestNetworkPlugins/group/kindnet/Localhost 0.14
306 TestNetworkPlugins/group/kindnet/HairPin 0.16
307 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
308 TestNoKubernetes/serial/ProfileList 1.62
309 TestNoKubernetes/serial/Stop 1.59
310 TestNoKubernetes/serial/StartNoArgs 26.7
311 TestNetworkPlugins/group/calico/Start 108.02
312 TestNetworkPlugins/group/flannel/Start 131.99
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.24
315 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
316 TestNetworkPlugins/group/custom-flannel/Start 136.62
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
320 TestNetworkPlugins/group/bridge/Start 167.1
321 TestNetworkPlugins/group/calico/ControllerPod 6.01
322 TestNetworkPlugins/group/calico/KubeletFlags 0.24
323 TestNetworkPlugins/group/calico/NetCatPod 11.26
324 TestNetworkPlugins/group/calico/DNS 0.17
325 TestNetworkPlugins/group/calico/Localhost 0.15
326 TestNetworkPlugins/group/calico/HairPin 0.23
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
329 TestNetworkPlugins/group/flannel/NetCatPod 12.37
332 TestNetworkPlugins/group/flannel/DNS 0.19
333 TestNetworkPlugins/group/flannel/Localhost 0.16
334 TestNetworkPlugins/group/flannel/HairPin 0.17
335 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
336 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.3
337 TestNetworkPlugins/group/custom-flannel/DNS 0.2
338 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
339 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
341 TestStartStop/group/no-preload/serial/FirstStart 151.19
343 TestStartStop/group/embed-certs/serial/FirstStart 116.24
344 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
345 TestNetworkPlugins/group/bridge/NetCatPod 12.29
346 TestNetworkPlugins/group/bridge/DNS 0.21
347 TestNetworkPlugins/group/bridge/Localhost 0.18
348 TestNetworkPlugins/group/bridge/HairPin 0.18
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.04
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
353 TestStartStop/group/embed-certs/serial/DeployApp 9.33
355 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
357 TestStartStop/group/no-preload/serial/DeployApp 9.3
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 681.34
365 TestStartStop/group/embed-certs/serial/SecondStart 568.12
367 TestStartStop/group/no-preload/serial/SecondStart 586.29
368 TestStartStop/group/old-k8s-version/serial/Stop 6.33
369 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
380 TestStartStop/group/newest-cni/serial/FirstStart 57.61
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
383 TestStartStop/group/newest-cni/serial/Stop 11.37
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
385 TestStartStop/group/newest-cni/serial/SecondStart 37.21
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
389 TestStartStop/group/newest-cni/serial/Pause 2.62
x
+
TestDownloadOnly/v1.20.0/json-events (45.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-878755 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-878755 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (45.727009297s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (45.73s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-878755
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-878755: exit status 85 (72.544284ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-878755 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:29 UTC |          |
	|         | -p download-only-878755        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 21:29:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 21:29:04.232553   12565 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:29:04.232817   12565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:29:04.232827   12565 out.go:304] Setting ErrFile to fd 2...
	I0404 21:29:04.232832   12565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:29:04.233027   12565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	W0404 21:29:04.233156   12565 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16143-5297/.minikube/config/config.json: open /home/jenkins/minikube-integration/16143-5297/.minikube/config/config.json: no such file or directory
	I0404 21:29:04.233705   12565 out.go:298] Setting JSON to true
	I0404 21:29:04.234512   12565 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":690,"bootTime":1712265455,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 21:29:04.234578   12565 start.go:139] virtualization: kvm guest
	I0404 21:29:04.237224   12565 out.go:97] [download-only-878755] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 21:29:04.239006   12565 out.go:169] MINIKUBE_LOCATION=16143
	W0404 21:29:04.237343   12565 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball: no such file or directory
	I0404 21:29:04.237426   12565 notify.go:220] Checking for updates...
	I0404 21:29:04.241793   12565 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 21:29:04.243139   12565 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:29:04.244739   12565 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:29:04.246007   12565 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0404 21:29:04.248360   12565 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0404 21:29:04.248599   12565 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 21:29:04.347581   12565 out.go:97] Using the kvm2 driver based on user configuration
	I0404 21:29:04.347619   12565 start.go:297] selected driver: kvm2
	I0404 21:29:04.347636   12565 start.go:901] validating driver "kvm2" against <nil>
	I0404 21:29:04.347969   12565 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:29:04.348107   12565 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 21:29:04.363269   12565 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 21:29:04.363365   12565 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0404 21:29:04.363827   12565 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0404 21:29:04.363968   12565 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0404 21:29:04.364016   12565 cni.go:84] Creating CNI manager for ""
	I0404 21:29:04.364029   12565 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 21:29:04.364037   12565 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0404 21:29:04.364088   12565 start.go:340] cluster config:
	{Name:download-only-878755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-878755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:29:04.364322   12565 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:29:04.366337   12565 out.go:97] Downloading VM boot image ...
	I0404 21:29:04.366395   12565 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso.sha256 -> /home/jenkins/minikube-integration/16143-5297/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0404 21:29:13.410263   12565 out.go:97] Starting "download-only-878755" primary control-plane node in "download-only-878755" cluster
	I0404 21:29:13.410310   12565 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 21:29:13.874508   12565 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0404 21:29:13.874541   12565 cache.go:56] Caching tarball of preloaded images
	I0404 21:29:13.874713   12565 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 21:29:13.876825   12565 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0404 21:29:13.876851   12565 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0404 21:29:13.975041   12565 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0404 21:29:25.535607   12565 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0404 21:29:25.535698   12565 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0404 21:29:26.442363   12565 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0404 21:29:26.442704   12565 profile.go:143] Saving config to /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/download-only-878755/config.json ...
	I0404 21:29:26.442737   12565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/download-only-878755/config.json: {Name:mk6d837c1ddd481b00945d75d094b0bc7f994cfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0404 21:29:26.442889   12565 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0404 21:29:26.443059   12565 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/16143-5297/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-878755 host does not exist
	  To start a cluster, run: "minikube start -p download-only-878755"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-878755
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/json-events (13.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-688290 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-688290 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.450542553s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (13.45s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-688290
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-688290: exit status 85 (73.734405ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-878755 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:29 UTC |                     |
	|         | -p download-only-878755        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:29 UTC | 04 Apr 24 21:29 UTC |
	| delete  | -p download-only-878755        | download-only-878755 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:29 UTC | 04 Apr 24 21:29 UTC |
	| start   | -o=json --download-only        | download-only-688290 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:29 UTC |                     |
	|         | -p download-only-688290        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 21:29:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 21:29:50.300583   12816 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:29:50.300705   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:29:50.300715   12816 out.go:304] Setting ErrFile to fd 2...
	I0404 21:29:50.300719   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:29:50.300870   12816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:29:50.301404   12816 out.go:298] Setting JSON to true
	I0404 21:29:50.302220   12816 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":736,"bootTime":1712265455,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 21:29:50.302275   12816 start.go:139] virtualization: kvm guest
	I0404 21:29:50.304707   12816 out.go:97] [download-only-688290] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 21:29:50.306424   12816 out.go:169] MINIKUBE_LOCATION=16143
	I0404 21:29:50.304854   12816 notify.go:220] Checking for updates...
	I0404 21:29:50.309350   12816 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 21:29:50.310726   12816 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:29:50.312044   12816 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:29:50.313410   12816 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0404 21:29:50.315990   12816 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0404 21:29:50.316255   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 21:29:50.347767   12816 out.go:97] Using the kvm2 driver based on user configuration
	I0404 21:29:50.347823   12816 start.go:297] selected driver: kvm2
	I0404 21:29:50.347831   12816 start.go:901] validating driver "kvm2" against <nil>
	I0404 21:29:50.348200   12816 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:29:50.348285   12816 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 21:29:50.363757   12816 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 21:29:50.363813   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0404 21:29:50.364369   12816 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0404 21:29:50.364537   12816 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0404 21:29:50.364615   12816 cni.go:84] Creating CNI manager for ""
	I0404 21:29:50.364630   12816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 21:29:50.364639   12816 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0404 21:29:50.364701   12816 start.go:340] cluster config:
	{Name:download-only-688290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-688290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:29:50.364815   12816 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:29:50.366915   12816 out.go:97] Starting "download-only-688290" primary control-plane node in "download-only-688290" cluster
	I0404 21:29:50.366941   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:29:50.881306   12816 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0404 21:29:50.881350   12816 cache.go:56] Caching tarball of preloaded images
	I0404 21:29:50.881515   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0404 21:29:50.883542   12816 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0404 21:29:50.883557   12816 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 ...
	I0404 21:29:50.984030   12816 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6f4e94cb6232b24c3932ab20b1ee6dad -> /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-688290 host does not exist
	  To start a cluster, run: "minikube start -p download-only-688290"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-688290
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/json-events (13.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-432080 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-432080 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.218730858s)
--- PASS: TestDownloadOnly/v1.30.0-rc.0/json-events (13.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-432080
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-432080: exit status 85 (72.103629ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-878755 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:29 UTC |                     |
	|         | -p download-only-878755           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:29 UTC | 04 Apr 24 21:29 UTC |
	| delete  | -p download-only-878755           | download-only-878755 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:29 UTC | 04 Apr 24 21:29 UTC |
	| start   | -o=json --download-only           | download-only-688290 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:29 UTC |                     |
	|         | -p download-only-688290           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC | 04 Apr 24 21:30 UTC |
	| delete  | -p download-only-688290           | download-only-688290 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC | 04 Apr 24 21:30 UTC |
	| start   | -o=json --download-only           | download-only-432080 | jenkins | v1.33.0-beta.0 | 04 Apr 24 21:30 UTC |                     |
	|         | -p download-only-432080           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0 |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/04 21:30:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0404 21:30:04.094729   12995 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:30:04.094849   12995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:30:04.094857   12995 out.go:304] Setting ErrFile to fd 2...
	I0404 21:30:04.094861   12995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:30:04.095046   12995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:30:04.095612   12995 out.go:298] Setting JSON to true
	I0404 21:30:04.096494   12995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":749,"bootTime":1712265455,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 21:30:04.096553   12995 start.go:139] virtualization: kvm guest
	I0404 21:30:04.099006   12995 out.go:97] [download-only-432080] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 21:30:04.100573   12995 out.go:169] MINIKUBE_LOCATION=16143
	I0404 21:30:04.099225   12995 notify.go:220] Checking for updates...
	I0404 21:30:04.103699   12995 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 21:30:04.105294   12995 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:30:04.106924   12995 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:30:04.108333   12995 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0404 21:30:04.111000   12995 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0404 21:30:04.111211   12995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 21:30:04.144964   12995 out.go:97] Using the kvm2 driver based on user configuration
	I0404 21:30:04.145008   12995 start.go:297] selected driver: kvm2
	I0404 21:30:04.145018   12995 start.go:901] validating driver "kvm2" against <nil>
	I0404 21:30:04.145483   12995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:30:04.145581   12995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16143-5297/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0404 21:30:04.160256   12995 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0404 21:30:04.160305   12995 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0404 21:30:04.160794   12995 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0404 21:30:04.160948   12995 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0404 21:30:04.161018   12995 cni.go:84] Creating CNI manager for ""
	I0404 21:30:04.161034   12995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0404 21:30:04.161048   12995 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0404 21:30:04.161111   12995 start.go:340] cluster config:
	{Name:download-only-432080 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:download-only-432080 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:30:04.161210   12995 iso.go:125] acquiring lock: {Name:mk16d4e4437dbcbe5e0299079ee219866e46c5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0404 21:30:04.163259   12995 out.go:97] Starting "download-only-432080" primary control-plane node in "download-only-432080" cluster
	I0404 21:30:04.163280   12995 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 21:30:04.678320   12995 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.0/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0404 21:30:04.678347   12995 cache.go:56] Caching tarball of preloaded images
	I0404 21:30:04.678510   12995 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0404 21:30:04.680704   12995 out.go:97] Downloading Kubernetes v1.30.0-rc.0 preload ...
	I0404 21:30:04.680736   12995 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0404 21:30:04.780015   12995 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.0/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:a8b7303f27cbc36bf6c5aef5b8609bfb -> /home/jenkins/minikube-integration/16143-5297/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-432080 host does not exist
	  To start a cluster, run: "minikube start -p download-only-432080"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-432080
--- PASS: TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-012512 --alsologtostderr --binary-mirror http://127.0.0.1:41613 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-012512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-012512
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
TestOffline (112.77s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-035370 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-035370 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m51.847915992s)
helpers_test.go:175: Cleaning up "offline-crio-035370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-035370
--- PASS: TestOffline (112.77s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-371778
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-371778: exit status 85 (60.077522ms)

                                                
                                                
-- stdout --
	* Profile "addons-371778" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-371778"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-371778
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-371778: exit status 85 (61.412304ms)

                                                
                                                
-- stdout --
	* Profile "addons-371778" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-371778"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (211.84s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-371778 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-371778 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m31.844162885s)
--- PASS: TestAddons/Setup (211.84s)

                                                
                                    
TestAddons/parallel/Registry (17.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 27.484265ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-72422" [75fbb208-e940-4f84-ae37-d85e195edeaf] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011052657s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nw2xt" [aae8dd6b-7489-4a11-91b8-b09ae3009693] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005580738s
addons_test.go:340: (dbg) Run:  kubectl --context addons-371778 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-371778 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-371778 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.185131268s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 ip
2024/04/04 21:34:06 [DEBUG] GET http://192.168.39.212:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.19s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.18s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nchdq" [5605a1b9-ec6c-46bd-a4d6-e7d7f7a5816d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004960082s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-371778
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-371778: (6.170490768s)
--- PASS: TestAddons/parallel/InspektorGadget (12.18s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.14s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.604915ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-4gcdm" [99896135-c9ec-418c-af55-cb7c8e9bee69] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006041159s
addons_test.go:415: (dbg) Run:  kubectl --context addons-371778 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-371778 addons disable metrics-server --alsologtostderr -v=1: (1.065641533s)
--- PASS: TestAddons/parallel/MetricsServer (6.14s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.92s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.279503ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-k2rdd" [012fb8a6-0e59-4491-93b3-98178f8b5f87] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.006556003s
addons_test.go:473: (dbg) Run:  kubectl --context addons-371778 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-371778 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.254021539s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.92s)

                                                
                                    
TestAddons/parallel/CSI (44.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 28.507409ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-371778 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-371778 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5142bde1-f84d-4081-b0da-bc047c1007b1] Pending
helpers_test.go:344: "task-pv-pod" [5142bde1-f84d-4081-b0da-bc047c1007b1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5142bde1-f84d-4081-b0da-bc047c1007b1] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004866442s
addons_test.go:584: (dbg) Run:  kubectl --context addons-371778 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-371778 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-371778 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-371778 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-371778 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-371778 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-371778 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5bdee91c-d919-4903-9e52-d67eb694026a] Pending
helpers_test.go:344: "task-pv-pod-restore" [5bdee91c-d919-4903-9e52-d67eb694026a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5bdee91c-d919-4903-9e52-d67eb694026a] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004473708s
addons_test.go:626: (dbg) Run:  kubectl --context addons-371778 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-371778 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-371778 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-371778 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.156355216s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.68s)

                                                
                                    
TestAddons/parallel/Headlamp (14.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-371778 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-371778 --alsologtostderr -v=1: (1.135526753s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-phlpc" [94f2c53a-d004-4235-ab7f-d56fab607309] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-phlpc" [94f2c53a-d004-4235-ab7f-d56fab607309] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-phlpc" [94f2c53a-d004-4235-ab7f-d56fab607309] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004590739s
--- PASS: TestAddons/parallel/Headlamp (14.14s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-7jb68" [c76c662d-34f9-4714-b0a8-c6209d406324] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008647567s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-371778
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                    
TestAddons/parallel/LocalPath (56.39s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-371778 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-371778 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371778 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4e615987-9488-4a53-9317-e18ca94525fa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4e615987-9488-4a53-9317-e18ca94525fa] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4e615987-9488-4a53-9317-e18ca94525fa] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00611517s
addons_test.go:891: (dbg) Run:  kubectl --context addons-371778 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 ssh "cat /opt/local-path-provisioner/pvc-5be7a3b0-ba74-4929-8411-99662f07185f_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-371778 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-371778 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-371778 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-371778 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.497887229s)
--- PASS: TestAddons/parallel/LocalPath (56.39s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cnk9f" [ddbb8390-14f9-4749-bf9d-28c23eca618a] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006998143s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-371778
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-88stp" [9f86c2c9-62b3-41e7-9373-de69268e4332] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005735354s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-371778 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-371778 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestCertOptions (76.42s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-754073 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-754073 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m14.992233731s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-754073 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-754073 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-754073 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-754073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-754073
--- PASS: TestCertOptions (76.42s)

                                                
                                    
TestCertExpiration (330.96s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-086102 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-086102 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m49.283703718s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-086102 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0404 22:38:50.479923   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-086102 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.553850323s)
helpers_test.go:175: Cleaning up "cert-expiration-086102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-086102
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-086102: (1.125414048s)
--- PASS: TestCertExpiration (330.96s)

                                                
                                    
TestForceSystemdFlag (70.44s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-048599 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-048599 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.437872214s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-048599 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-048599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-048599
--- PASS: TestForceSystemdFlag (70.44s)

                                                
                                    
TestForceSystemdEnv (54.72s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-436667 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-436667 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.909380027s)
helpers_test.go:175: Cleaning up "force-systemd-env-436667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-436667
--- PASS: TestForceSystemdEnv (54.72s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.44s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.44s)

                                                
                                    
TestErrorSpam/setup (43.97s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-418302 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-418302 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-418302 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-418302 --driver=kvm2  --container-runtime=crio: (43.965340064s)
--- PASS: TestErrorSpam/setup (43.97s)

                                                
                                    
TestErrorSpam/start (0.4s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

                                                
                                    
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
TestErrorSpam/pause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (5.02s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 stop: (2.305866493s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 stop: (1.50294041s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-418302 --log_dir /tmp/nospam-418302 stop: (1.209365298s)
--- PASS: TestErrorSpam/stop (5.02s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16143-5297/.minikube/files/etc/test/nested/copy/12554/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (60.68s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596385 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-596385 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m0.680874602s)
--- PASS: TestFunctional/serial/StartWithProxy (60.68s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.51s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596385 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-596385 --alsologtostderr -v=8: (36.512391443s)
functional_test.go:659: soft start took 36.512973959s for "functional-596385" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.51s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-596385 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 cache add registry.k8s.io/pause:3.1: (1.186118257s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 cache add registry.k8s.io/pause:3.3: (1.213261388s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 cache add registry.k8s.io/pause:latest: (1.441877094s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.84s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-596385 /tmp/TestFunctionalserialCacheCmdcacheadd_local2517361506/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 cache add minikube-local-cache-test:functional-596385
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 cache add minikube-local-cache-test:functional-596385: (1.812781258s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 cache delete minikube-local-cache-test:functional-596385
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-596385
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596385 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.534532ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 kubectl -- --context functional-596385 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-596385 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.79s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596385 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-596385 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.790659081s)
functional_test.go:757: restart took 33.790757791s for "functional-596385" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.79s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-596385 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 logs: (1.548928458s)
--- PASS: TestFunctional/serial/LogsCmd (1.55s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 logs --file /tmp/TestFunctionalserialLogsFileCmd1552895373/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 logs --file /tmp/TestFunctionalserialLogsFileCmd1552895373/001/logs.txt: (1.527940649s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                    
TestFunctional/serial/InvalidService (4.03s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-596385 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-596385
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-596385: exit status 115 (282.782797ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.250:32472 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-596385 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.03s)
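
To reproduce this negative check outside the harness, the same three commands suffice. A minimal sketch, assuming the functional-596385 profile and the repository's testdata/invalidsvc.yaml; exit code 115 is the SVC_UNREACHABLE status reported above:

    kubectl --context functional-596385 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-596385
    echo "exit status: $?"    # expected 115: no running pod backs the service
    kubectl --context functional-596385 delete -f testdata/invalidsvc.yaml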

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596385 config get cpus: exit status 14 (64.070246ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596385 config get cpus: exit status 14 (61.363949ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
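
The sequence above doubles as a usage example for profile-scoped config: get on an unset key fails with exit code 14, and a set/get/unset round trip behaves as expected. A minimal sketch against the same profile:

    out/minikube-linux-amd64 -p functional-596385 config get cpus      # exit 14: key not set
    out/minikube-linux-amd64 -p functional-596385 config set cpus 2
    out/minikube-linux-amd64 -p functional-596385 config get cpus      # prints 2
    out/minikube-linux-amd64 -p functional-596385 config unset cpus
    out/minikube-linux-amd64 -p functional-596385 config get cpus      # exit 14 again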

TestFunctional/parallel/DashboardCmd (20.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-596385 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-596385 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20587: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.36s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596385 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-596385 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.609538ms)

-- stdout --
	* [functional-596385] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0404 21:43:40.533585   20467 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:43:40.533847   20467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:43:40.533856   20467 out.go:304] Setting ErrFile to fd 2...
	I0404 21:43:40.533860   20467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:43:40.534042   20467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:43:40.534565   20467 out.go:298] Setting JSON to false
	I0404 21:43:40.535454   20467 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1566,"bootTime":1712265455,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 21:43:40.535515   20467 start.go:139] virtualization: kvm guest
	I0404 21:43:40.537976   20467 out.go:177] * [functional-596385] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 21:43:40.539515   20467 notify.go:220] Checking for updates...
	I0404 21:43:40.539538   20467 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 21:43:40.541026   20467 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 21:43:40.542639   20467 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:43:40.544073   20467 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:43:40.545413   20467 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 21:43:40.547246   20467 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 21:43:40.549004   20467 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:43:40.549384   20467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:43:40.549428   20467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:43:40.563875   20467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36463
	I0404 21:43:40.564306   20467 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:43:40.564957   20467 main.go:141] libmachine: Using API Version  1
	I0404 21:43:40.565015   20467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:43:40.565361   20467 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:43:40.565620   20467 main.go:141] libmachine: (functional-596385) Calling .DriverName
	I0404 21:43:40.565851   20467 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 21:43:40.566132   20467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:43:40.566161   20467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:43:40.582338   20467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44503
	I0404 21:43:40.582723   20467 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:43:40.583204   20467 main.go:141] libmachine: Using API Version  1
	I0404 21:43:40.583227   20467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:43:40.583582   20467 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:43:40.583761   20467 main.go:141] libmachine: (functional-596385) Calling .DriverName
	I0404 21:43:40.619862   20467 out.go:177] * Using the kvm2 driver based on existing profile
	I0404 21:43:40.621567   20467 start.go:297] selected driver: kvm2
	I0404 21:43:40.621583   20467 start.go:901] validating driver "kvm2" against &{Name:functional-596385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-596385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:43:40.621718   20467 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 21:43:40.624225   20467 out.go:177] 
	W0404 21:43:40.625962   20467 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0404 21:43:40.627472   20467 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596385 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
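
The dry-run exercise also documents the memory validation path: asking for less than the usable minimum fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before the existing VM is touched. A minimal sketch against the same profile:

    out/minikube-linux-amd64 start -p functional-596385 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio    # exit 23: 250MiB is below the 1800MB minimum
    out/minikube-linux-amd64 start -p functional-596385 --dry-run \
      --driver=kvm2 --container-runtime=crio    # exit 0: the existing profile validates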

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596385 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-596385 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (150.886118ms)

-- stdout --
	* [functional-596385] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0404 21:43:40.826358   20523 out.go:291] Setting OutFile to fd 1 ...
	I0404 21:43:40.826614   20523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:43:40.826624   20523 out.go:304] Setting ErrFile to fd 2...
	I0404 21:43:40.826628   20523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 21:43:40.826976   20523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 21:43:40.827534   20523 out.go:298] Setting JSON to false
	I0404 21:43:40.828512   20523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1566,"bootTime":1712265455,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 21:43:40.828582   20523 start.go:139] virtualization: kvm guest
	I0404 21:43:40.830680   20523 out.go:177] * [functional-596385] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0404 21:43:40.832560   20523 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 21:43:40.832582   20523 notify.go:220] Checking for updates...
	I0404 21:43:40.835863   20523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 21:43:40.837699   20523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 21:43:40.839176   20523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 21:43:40.840659   20523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 21:43:40.842332   20523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 21:43:40.844386   20523 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 21:43:40.844839   20523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:43:40.844893   20523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:43:40.860011   20523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0404 21:43:40.860486   20523 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:43:40.860944   20523 main.go:141] libmachine: Using API Version  1
	I0404 21:43:40.860969   20523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:43:40.861293   20523 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:43:40.861484   20523 main.go:141] libmachine: (functional-596385) Calling .DriverName
	I0404 21:43:40.861754   20523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 21:43:40.862066   20523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 21:43:40.862106   20523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 21:43:40.878601   20523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45177
	I0404 21:43:40.878965   20523 main.go:141] libmachine: () Calling .GetVersion
	I0404 21:43:40.879442   20523 main.go:141] libmachine: Using API Version  1
	I0404 21:43:40.879464   20523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 21:43:40.879750   20523 main.go:141] libmachine: () Calling .GetMachineName
	I0404 21:43:40.879926   20523 main.go:141] libmachine: (functional-596385) Calling .DriverName
	I0404 21:43:40.913382   20523 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0404 21:43:40.914924   20523 start.go:297] selected driver: kvm2
	I0404 21:43:40.914937   20523 start.go:901] validating driver "kvm2" against &{Name:functional-596385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-596385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0404 21:43:40.915078   20523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 21:43:40.917423   20523 out.go:177] 
	W0404 21:43:40.918970   20523 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0404 21:43:40.920447   20523 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
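
Status output can be shaped with a Go template or emitted as JSON, as the three runs above show. A minimal sketch; the template keys mirror the ones used by the test:

    out/minikube-linux-amd64 -p functional-596385 status
    out/minikube-linux-amd64 -p functional-596385 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
    out/minikube-linux-amd64 -p functional-596385 status -o json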

TestFunctional/parallel/ServiceCmdConnect (8.82s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-596385 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-596385 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-wxvt8" [ed68bd74-373e-46a7-94cb-29351105f64b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-wxvt8" [ed68bd74-373e-46a7-94cb-29351105f64b] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005262835s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.250:30334
functional_test.go:1671: http://192.168.39.250:30334: success! body:

Hostname: hello-node-connect-55497b8b78-wxvt8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.250:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.250:30334
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.82s)
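
This flow is a complete NodePort round trip: create a deployment, expose it, resolve the URL through minikube, then curl it. A minimal sketch reusing the same names; the echoserver image is the one the test pulls, and the node port is assigned per run:

    kubectl --context functional-596385 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-596385 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-596385 service hello-node-connect --url)
    curl -s "$URL"    # echoes the request details once the pod is Running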

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (51.74s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c40cdab2-fb81-4108-9818-993db7cdc55b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005150801s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-596385 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-596385 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-596385 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-596385 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-596385 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4f4020f6-a1aa-4dd4-b6db-d495d42e9705] Pending
helpers_test.go:344: "sp-pod" [4f4020f6-a1aa-4dd4-b6db-d495d42e9705] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4f4020f6-a1aa-4dd4-b6db-d495d42e9705] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.00646933s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-596385 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-596385 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-596385 delete -f testdata/storage-provisioner/pod.yaml: (1.123515048s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-596385 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b22f0dd9-9fbd-4a1e-a8ee-f01bcaabd3f9] Pending
helpers_test.go:344: "sp-pod" [b22f0dd9-9fbd-4a1e-a8ee-f01bcaabd3f9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b22f0dd9-9fbd-4a1e-a8ee-f01bcaabd3f9] Running
E0404 21:44:00.722088   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
2024/04/04 21:44:00 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005014494s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-596385 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.74s)
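
The PVC run above amounts to a persistence check: write a file through one pod, delete the pod, and confirm the file is still there in a replacement pod bound to the same claim. A minimal sketch using the repository's testdata manifests; wait for sp-pod to reach Running before each exec:

    kubectl --context functional-596385 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-596385 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-596385 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-596385 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-596385 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-596385 exec sp-pod -- ls /tmp/mount    # foo should still be listed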

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh -n functional-596385 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 cp functional-596385:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd47000099/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh -n functional-596385 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh -n functional-596385 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)
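
The cp subcommand copies files both into and out of the node, verified over ssh above. A minimal sketch with the same in-VM path; the local destination path is arbitrary:

    out/minikube-linux-amd64 -p functional-596385 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-596385 ssh -n functional-596385 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-596385 cp functional-596385:/home/docker/cp-test.txt /tmp/cp-test.txt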

TestFunctional/parallel/MySQL (25.48s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-596385 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-ht4xn" [f0ee5e01-0272-4d5c-a607-3ed3682e744d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-ht4xn" [f0ee5e01-0272-4d5c-a607-3ed3682e744d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.00473404s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-596385 exec mysql-859648c796-ht4xn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-596385 exec mysql-859648c796-ht4xn -- mysql -ppassword -e "show databases;": exit status 1 (292.06746ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-596385 exec mysql-859648c796-ht4xn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-596385 exec mysql-859648c796-ht4xn -- mysql -ppassword -e "show databases;": exit status 1 (295.77574ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-596385 exec mysql-859648c796-ht4xn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.48s)
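
The two failed attempts above are just mysqld still starting inside the pod; the query succeeds once the socket is up. A minimal sketch; the pod name is specific to this run, so look it up first:

    POD=$(kubectl --context functional-596385 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
    kubectl --context functional-596385 exec "$POD" -- mysql -ppassword -e "show databases;"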

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12554/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "sudo cat /etc/test/nested/copy/12554/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12554.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "sudo cat /etc/ssl/certs/12554.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12554.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "sudo cat /usr/share/ca-certificates/12554.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/125542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "sudo cat /etc/ssl/certs/125542.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/125542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "sudo cat /usr/share/ca-certificates/125542.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)
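
The cert-sync check confirms that host certificates land in both /etc/ssl/certs and /usr/share/ca-certificates inside the VM; the 12554-based file names derive from this run's test process ID, so they vary between runs. A minimal sketch:

    out/minikube-linux-amd64 -p functional-596385 ssh "sudo cat /etc/ssl/certs/12554.pem"
    out/minikube-linux-amd64 -p functional-596385 ssh "sudo cat /usr/share/ca-certificates/12554.pem"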

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-596385 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596385 ssh "sudo systemctl is-active docker": exit status 1 (256.570094ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596385 ssh "sudo systemctl is-active containerd": exit status 1 (230.132645ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
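
Because this profile runs crio, the other runtimes should report inactive; systemctl is-active exits non-zero for an inactive unit, which is why the ssh wrapper surfaces exit status 1 above. A minimal sketch; the crio line is the expected positive control:

    out/minikube-linux-amd64 -p functional-596385 ssh "sudo systemctl is-active docker"        # inactive (remote status 3)
    out/minikube-linux-amd64 -p functional-596385 ssh "sudo systemctl is-active containerd"    # inactive (remote status 3)
    out/minikube-linux-amd64 -p functional-596385 ssh "sudo systemctl is-active crio"          # expected: active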

TestFunctional/parallel/License (0.61s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.79s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.79s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-596385 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-596385
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-596385
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-596385 image ls --format short --alsologtostderr:
I0404 21:43:45.901000   20891 out.go:291] Setting OutFile to fd 1 ...
I0404 21:43:45.901279   20891 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0404 21:43:45.901290   20891 out.go:304] Setting ErrFile to fd 2...
I0404 21:43:45.901296   20891 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0404 21:43:45.901511   20891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
I0404 21:43:45.902125   20891 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0404 21:43:45.902273   20891 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0404 21:43:45.902813   20891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0404 21:43:45.902861   20891 main.go:141] libmachine: Launching plugin server for driver kvm2
I0404 21:43:45.918752   20891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39381
I0404 21:43:45.919256   20891 main.go:141] libmachine: () Calling .GetVersion
I0404 21:43:45.919831   20891 main.go:141] libmachine: Using API Version  1
I0404 21:43:45.919858   20891 main.go:141] libmachine: () Calling .SetConfigRaw
I0404 21:43:45.920225   20891 main.go:141] libmachine: () Calling .GetMachineName
I0404 21:43:45.920412   20891 main.go:141] libmachine: (functional-596385) Calling .GetState
I0404 21:43:45.922287   20891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0404 21:43:45.922341   20891 main.go:141] libmachine: Launching plugin server for driver kvm2
I0404 21:43:45.937360   20891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
I0404 21:43:45.937876   20891 main.go:141] libmachine: () Calling .GetVersion
I0404 21:43:45.938370   20891 main.go:141] libmachine: Using API Version  1
I0404 21:43:45.938394   20891 main.go:141] libmachine: () Calling .SetConfigRaw
I0404 21:43:45.938724   20891 main.go:141] libmachine: () Calling .GetMachineName
I0404 21:43:45.939049   20891 main.go:141] libmachine: (functional-596385) Calling .DriverName
I0404 21:43:45.939305   20891 ssh_runner.go:195] Run: systemctl --version
I0404 21:43:45.939335   20891 main.go:141] libmachine: (functional-596385) Calling .GetSSHHostname
I0404 21:43:45.942557   20891 main.go:141] libmachine: (functional-596385) DBG | domain functional-596385 has defined MAC address 52:54:00:dd:a2:e7 in network mk-functional-596385
I0404 21:43:45.943007   20891 main.go:141] libmachine: (functional-596385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a2:e7", ip: ""} in network mk-functional-596385: {Iface:virbr1 ExpiryTime:2024-04-04 22:40:56 +0000 UTC Type:0 Mac:52:54:00:dd:a2:e7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:functional-596385 Clientid:01:52:54:00:dd:a2:e7}
I0404 21:43:45.943052   20891 main.go:141] libmachine: (functional-596385) DBG | domain functional-596385 has defined IP address 192.168.39.250 and MAC address 52:54:00:dd:a2:e7 in network mk-functional-596385
I0404 21:43:45.943182   20891 main.go:141] libmachine: (functional-596385) Calling .GetSSHPort
I0404 21:43:45.943370   20891 main.go:141] libmachine: (functional-596385) Calling .GetSSHKeyPath
I0404 21:43:45.943535   20891 main.go:141] libmachine: (functional-596385) Calling .GetSSHUsername
I0404 21:43:45.943690   20891 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/functional-596385/id_rsa Username:docker}
I0404 21:43:46.186444   20891 ssh_runner.go:195] Run: sudo crictl images --output json
I0404 21:43:46.423806   20891 main.go:141] libmachine: Making call to close driver server
I0404 21:43:46.423827   20891 main.go:141] libmachine: (functional-596385) Calling .Close
I0404 21:43:46.424164   20891 main.go:141] libmachine: (functional-596385) DBG | Closing plugin on server side
I0404 21:43:46.424261   20891 main.go:141] libmachine: Successfully made call to close driver server
I0404 21:43:46.424298   20891 main.go:141] libmachine: Making call to close connection to plugin binary
I0404 21:43:46.424322   20891 main.go:141] libmachine: Making call to close driver server
I0404 21:43:46.424330   20891 main.go:141] libmachine: (functional-596385) Calling .Close
I0404 21:43:46.424635   20891 main.go:141] libmachine: Successfully made call to close driver server
I0404 21:43:46.424675   20891 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.60s)
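
The image listing can be rendered in several formats; the three ImageCommands entries in this group use short, table, and json. A minimal sketch:

    out/minikube-linux-amd64 -p functional-596385 image ls --format short
    out/minikube-linux-amd64 -p functional-596385 image ls --format table
    out/minikube-linux-amd64 -p functional-596385 image ls --format json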

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-596385 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/minikube-local-cache-test     | functional-596385  | 069233c17db22 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| gcr.io/google-containers/addon-resizer  | functional-596385  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 92b11f67642b6 | 191MB  |
| registry.k8s.io/kube-apiserver          | v1.29.3            | 39f995c9f1996 | 129MB  |
| registry.k8s.io/kube-proxy              | v1.29.3            | a1d263b5dc5b0 | 83.6MB |
| registry.k8s.io/kube-scheduler          | v1.29.3            | 8c390d98f50c0 | 60.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.29.3            | 6052a25da3f97 | 123MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-596385 image ls --format table --alsologtostderr:
I0404 21:43:49.154399   21338 out.go:291] Setting OutFile to fd 1 ...
I0404 21:43:49.154510   21338 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0404 21:43:49.154522   21338 out.go:304] Setting ErrFile to fd 2...
I0404 21:43:49.154527   21338 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0404 21:43:49.154758   21338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
I0404 21:43:49.155341   21338 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0404 21:43:49.155461   21338 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0404 21:43:49.155849   21338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0404 21:43:49.155913   21338 main.go:141] libmachine: Launching plugin server for driver kvm2
I0404 21:43:49.170876   21338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
I0404 21:43:49.171374   21338 main.go:141] libmachine: () Calling .GetVersion
I0404 21:43:49.172024   21338 main.go:141] libmachine: Using API Version  1
I0404 21:43:49.172047   21338 main.go:141] libmachine: () Calling .SetConfigRaw
I0404 21:43:49.172469   21338 main.go:141] libmachine: () Calling .GetMachineName
I0404 21:43:49.172652   21338 main.go:141] libmachine: (functional-596385) Calling .GetState
I0404 21:43:49.174553   21338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0404 21:43:49.174594   21338 main.go:141] libmachine: Launching plugin server for driver kvm2
I0404 21:43:49.190014   21338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39045
I0404 21:43:49.190570   21338 main.go:141] libmachine: () Calling .GetVersion
I0404 21:43:49.191102   21338 main.go:141] libmachine: Using API Version  1
I0404 21:43:49.191126   21338 main.go:141] libmachine: () Calling .SetConfigRaw
I0404 21:43:49.191488   21338 main.go:141] libmachine: () Calling .GetMachineName
I0404 21:43:49.191704   21338 main.go:141] libmachine: (functional-596385) Calling .DriverName
I0404 21:43:49.191910   21338 ssh_runner.go:195] Run: systemctl --version
I0404 21:43:49.191939   21338 main.go:141] libmachine: (functional-596385) Calling .GetSSHHostname
I0404 21:43:49.194888   21338 main.go:141] libmachine: (functional-596385) DBG | domain functional-596385 has defined MAC address 52:54:00:dd:a2:e7 in network mk-functional-596385
I0404 21:43:49.195338   21338 main.go:141] libmachine: (functional-596385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a2:e7", ip: ""} in network mk-functional-596385: {Iface:virbr1 ExpiryTime:2024-04-04 22:40:56 +0000 UTC Type:0 Mac:52:54:00:dd:a2:e7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:functional-596385 Clientid:01:52:54:00:dd:a2:e7}
I0404 21:43:49.195369   21338 main.go:141] libmachine: (functional-596385) DBG | domain functional-596385 has defined IP address 192.168.39.250 and MAC address 52:54:00:dd:a2:e7 in network mk-functional-596385
I0404 21:43:49.195544   21338 main.go:141] libmachine: (functional-596385) Calling .GetSSHPort
I0404 21:43:49.195715   21338 main.go:141] libmachine: (functional-596385) Calling .GetSSHKeyPath
I0404 21:43:49.195911   21338 main.go:141] libmachine: (functional-596385) Calling .GetSSHUsername
I0404 21:43:49.196067   21338 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/functional-596385/id_rsa Username:docker}
I0404 21:43:49.301704   21338 ssh_runner.go:195] Run: sudo crictl images --output json
I0404 21:43:49.367383   21338 main.go:141] libmachine: Making call to close driver server
I0404 21:43:49.367409   21338 main.go:141] libmachine: (functional-596385) Calling .Close
I0404 21:43:49.367700   21338 main.go:141] libmachine: Successfully made call to close driver server
I0404 21:43:49.367724   21338 main.go:141] libmachine: Making call to close connection to plugin binary
I0404 21:43:49.367743   21338 main.go:141] libmachine: Making call to close driver server
I0404 21:43:49.367756   21338 main.go:141] libmachine: (functional-596385) DBG | Closing plugin on server side
I0404 21:43:49.367759   21338 main.go:141] libmachine: (functional-596385) Calling .Close
I0404 21:43:49.367986   21338 main.go:141] libmachine: Successfully made call to close driver server
I0404 21:43:49.368003   21338 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-596385 image ls --format json --alsologtostderr:
[{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a","registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"60724018"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee04
15a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7","docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865876"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-co
ntroller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"123142962"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"83634073"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":[],"size":"1462480"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f
5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"069233c17db22878c48c9e2876efb837ae4777b44e31ebe42984ac46ed057030","repoDigests":["localhost/minikube-local-cache-test@sha256:1ace92bc1aa25a024d04e79b4cac990d592633968712ecce36578abb13994167"],"repoTags":["localhost/minikube-local-cache-test:functional-596385"],"size":"3328"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee
1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/core
dns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"128508878"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{
"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-596385"],"size":"34114467"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-596385 image ls --format json --alsologtostderr:
I0404 21:43:48.805182   21314 out.go:291] Setting OutFile to fd 1 ...
I0404 21:43:48.805328   21314 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0404 21:43:48.805340   21314 out.go:304] Setting ErrFile to fd 2...
I0404 21:43:48.805346   21314 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0404 21:43:48.805591   21314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
I0404 21:43:48.806192   21314 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0404 21:43:48.806310   21314 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0404 21:43:48.806703   21314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0404 21:43:48.806777   21314 main.go:141] libmachine: Launching plugin server for driver kvm2
I0404 21:43:48.822209   21314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
I0404 21:43:48.822657   21314 main.go:141] libmachine: () Calling .GetVersion
I0404 21:43:48.823323   21314 main.go:141] libmachine: Using API Version  1
I0404 21:43:48.823357   21314 main.go:141] libmachine: () Calling .SetConfigRaw
I0404 21:43:48.823697   21314 main.go:141] libmachine: () Calling .GetMachineName
I0404 21:43:48.823920   21314 main.go:141] libmachine: (functional-596385) Calling .GetState
I0404 21:43:48.826153   21314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0404 21:43:48.826200   21314 main.go:141] libmachine: Launching plugin server for driver kvm2
I0404 21:43:48.841360   21314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
I0404 21:43:48.841836   21314 main.go:141] libmachine: () Calling .GetVersion
I0404 21:43:48.842261   21314 main.go:141] libmachine: Using API Version  1
I0404 21:43:48.842275   21314 main.go:141] libmachine: () Calling .SetConfigRaw
I0404 21:43:48.842597   21314 main.go:141] libmachine: () Calling .GetMachineName
I0404 21:43:48.842792   21314 main.go:141] libmachine: (functional-596385) Calling .DriverName
I0404 21:43:48.843020   21314 ssh_runner.go:195] Run: systemctl --version
I0404 21:43:48.843045   21314 main.go:141] libmachine: (functional-596385) Calling .GetSSHHostname
I0404 21:43:48.846094   21314 main.go:141] libmachine: (functional-596385) DBG | domain functional-596385 has defined MAC address 52:54:00:dd:a2:e7 in network mk-functional-596385
I0404 21:43:48.846525   21314 main.go:141] libmachine: (functional-596385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a2:e7", ip: ""} in network mk-functional-596385: {Iface:virbr1 ExpiryTime:2024-04-04 22:40:56 +0000 UTC Type:0 Mac:52:54:00:dd:a2:e7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:functional-596385 Clientid:01:52:54:00:dd:a2:e7}
I0404 21:43:48.846562   21314 main.go:141] libmachine: (functional-596385) DBG | domain functional-596385 has defined IP address 192.168.39.250 and MAC address 52:54:00:dd:a2:e7 in network mk-functional-596385
I0404 21:43:48.846669   21314 main.go:141] libmachine: (functional-596385) Calling .GetSSHPort
I0404 21:43:48.846900   21314 main.go:141] libmachine: (functional-596385) Calling .GetSSHKeyPath
I0404 21:43:48.847059   21314 main.go:141] libmachine: (functional-596385) Calling .GetSSHUsername
I0404 21:43:48.847202   21314 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/functional-596385/id_rsa Username:docker}
I0404 21:43:48.958854   21314 ssh_runner.go:195] Run: sudo crictl images --output json
I0404 21:43:49.090986   21314 main.go:141] libmachine: Making call to close driver server
I0404 21:43:49.090999   21314 main.go:141] libmachine: (functional-596385) Calling .Close
I0404 21:43:49.091313   21314 main.go:141] libmachine: Successfully made call to close driver server
I0404 21:43:49.091332   21314 main.go:141] libmachine: Making call to close connection to plugin binary
I0404 21:43:49.091359   21314 main.go:141] libmachine: (functional-596385) DBG | Closing plugin on server side
I0404 21:43:49.091366   21314 main.go:141] libmachine: Making call to close driver server
I0404 21:43:49.091397   21314 main.go:141] libmachine: (functional-596385) Calling .Close
I0404 21:43:49.091626   21314 main.go:141] libmachine: Successfully made call to close driver server
I0404 21:43:49.091644   21314 main.go:141] libmachine: Making call to close connection to plugin binary
I0404 21:43:49.091656   21314 main.go:141] libmachine: (functional-596385) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)
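
Note: the JSON array above (one object per image, with id, repoDigests, repoTags and size fields) is straightforward to consume from a script. The following is a minimal Go sketch that decodes it from stdin, assuming only the field names visible in the output; it is not part of the minikube test suite:

// list_images.go - decode the output of "minikube image ls --format json".
// Field names mirror the JSON shown above; illustrative sketch only.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a decimal string of bytes
}

func main() {
	// Usage: minikube -p functional-596385 image ls --format json | go run list_images.go
	var images []imageEntry
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		tag := "<untagged>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}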

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-596385 image ls --format yaml --alsologtostderr:
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "123142962"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-596385
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
- registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "60724018"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "190865876"
- id: 069233c17db22878c48c9e2876efb837ae4777b44e31ebe42984ac46ed057030
repoDigests:
- localhost/minikube-local-cache-test@sha256:1ace92bc1aa25a024d04e79b4cac990d592633968712ecce36578abb13994167
repoTags:
- localhost/minikube-local-cache-test:functional-596385
size: "3328"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "128508878"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "83634073"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-596385 image ls --format yaml --alsologtostderr:
I0404 21:43:46.499978   20961 out.go:291] Setting OutFile to fd 1 ...
I0404 21:43:46.500108   20961 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0404 21:43:46.500137   20961 out.go:304] Setting ErrFile to fd 2...
I0404 21:43:46.500144   20961 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0404 21:43:46.500440   20961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
I0404 21:43:46.501312   20961 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0404 21:43:46.501457   20961 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0404 21:43:46.502098   20961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0404 21:43:46.502171   20961 main.go:141] libmachine: Launching plugin server for driver kvm2
I0404 21:43:46.523351   20961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
I0404 21:43:46.523898   20961 main.go:141] libmachine: () Calling .GetVersion
I0404 21:43:46.524772   20961 main.go:141] libmachine: Using API Version  1
I0404 21:43:46.524802   20961 main.go:141] libmachine: () Calling .SetConfigRaw
I0404 21:43:46.525174   20961 main.go:141] libmachine: () Calling .GetMachineName
I0404 21:43:46.525397   20961 main.go:141] libmachine: (functional-596385) Calling .GetState
I0404 21:43:46.527850   20961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0404 21:43:46.527907   20961 main.go:141] libmachine: Launching plugin server for driver kvm2
I0404 21:43:46.546319   20961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
I0404 21:43:46.546894   20961 main.go:141] libmachine: () Calling .GetVersion
I0404 21:43:46.547590   20961 main.go:141] libmachine: Using API Version  1
I0404 21:43:46.547640   20961 main.go:141] libmachine: () Calling .SetConfigRaw
I0404 21:43:46.548083   20961 main.go:141] libmachine: () Calling .GetMachineName
I0404 21:43:46.548315   20961 main.go:141] libmachine: (functional-596385) Calling .DriverName
I0404 21:43:46.548543   20961 ssh_runner.go:195] Run: systemctl --version
I0404 21:43:46.548567   20961 main.go:141] libmachine: (functional-596385) Calling .GetSSHHostname
I0404 21:43:46.552458   20961 main.go:141] libmachine: (functional-596385) DBG | domain functional-596385 has defined MAC address 52:54:00:dd:a2:e7 in network mk-functional-596385
I0404 21:43:46.552976   20961 main.go:141] libmachine: (functional-596385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a2:e7", ip: ""} in network mk-functional-596385: {Iface:virbr1 ExpiryTime:2024-04-04 22:40:56 +0000 UTC Type:0 Mac:52:54:00:dd:a2:e7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:functional-596385 Clientid:01:52:54:00:dd:a2:e7}
I0404 21:43:46.553006   20961 main.go:141] libmachine: (functional-596385) DBG | domain functional-596385 has defined IP address 192.168.39.250 and MAC address 52:54:00:dd:a2:e7 in network mk-functional-596385
I0404 21:43:46.553072   20961 main.go:141] libmachine: (functional-596385) Calling .GetSSHPort
I0404 21:43:46.553262   20961 main.go:141] libmachine: (functional-596385) Calling .GetSSHKeyPath
I0404 21:43:46.553403   20961 main.go:141] libmachine: (functional-596385) Calling .GetSSHUsername
I0404 21:43:46.553685   20961 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/functional-596385/id_rsa Username:docker}
I0404 21:43:46.652377   20961 ssh_runner.go:195] Run: sudo crictl images --output json
I0404 21:43:46.757567   20961 main.go:141] libmachine: Making call to close driver server
I0404 21:43:46.757604   20961 main.go:141] libmachine: (functional-596385) Calling .Close
I0404 21:43:46.757922   20961 main.go:141] libmachine: Successfully made call to close driver server
I0404 21:43:46.757940   20961 main.go:141] libmachine: Making call to close connection to plugin binary
I0404 21:43:46.757956   20961 main.go:141] libmachine: Making call to close driver server
I0404 21:43:46.757963   20961 main.go:141] libmachine: (functional-596385) Calling .Close
I0404 21:43:46.758205   20961 main.go:141] libmachine: Successfully made call to close driver server
I0404 21:43:46.758220   20961 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596385 ssh pgrep buildkitd: exit status 1 (235.468953ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image build -t localhost/my-image:functional-596385 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 image build -t localhost/my-image:functional-596385 testdata/build --alsologtostderr: (3.410857627s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-596385 image build -t localhost/my-image:functional-596385 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5a04eb2bad4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-596385
--> 11ad773a643
Successfully tagged localhost/my-image:functional-596385
11ad773a643729e50922c7c664cbb8f7c59905ed9b47bc4044e31a646e2f07e1
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-596385 image build -t localhost/my-image:functional-596385 testdata/build --alsologtostderr:
I0404 21:43:47.054265   21065 out.go:291] Setting OutFile to fd 1 ...
I0404 21:43:47.054543   21065 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0404 21:43:47.054554   21065 out.go:304] Setting ErrFile to fd 2...
I0404 21:43:47.054558   21065 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0404 21:43:47.054742   21065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
I0404 21:43:47.055311   21065 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0404 21:43:47.055824   21065 config.go:182] Loaded profile config "functional-596385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0404 21:43:47.056343   21065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0404 21:43:47.056422   21065 main.go:141] libmachine: Launching plugin server for driver kvm2
I0404 21:43:47.071336   21065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
I0404 21:43:47.071827   21065 main.go:141] libmachine: () Calling .GetVersion
I0404 21:43:47.072362   21065 main.go:141] libmachine: Using API Version  1
I0404 21:43:47.072387   21065 main.go:141] libmachine: () Calling .SetConfigRaw
I0404 21:43:47.072729   21065 main.go:141] libmachine: () Calling .GetMachineName
I0404 21:43:47.072937   21065 main.go:141] libmachine: (functional-596385) Calling .GetState
I0404 21:43:47.075016   21065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0404 21:43:47.075071   21065 main.go:141] libmachine: Launching plugin server for driver kvm2
I0404 21:43:47.091057   21065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
I0404 21:43:47.091502   21065 main.go:141] libmachine: () Calling .GetVersion
I0404 21:43:47.092009   21065 main.go:141] libmachine: Using API Version  1
I0404 21:43:47.092032   21065 main.go:141] libmachine: () Calling .SetConfigRaw
I0404 21:43:47.092348   21065 main.go:141] libmachine: () Calling .GetMachineName
I0404 21:43:47.092537   21065 main.go:141] libmachine: (functional-596385) Calling .DriverName
I0404 21:43:47.092727   21065 ssh_runner.go:195] Run: systemctl --version
I0404 21:43:47.092750   21065 main.go:141] libmachine: (functional-596385) Calling .GetSSHHostname
I0404 21:43:47.095323   21065 main.go:141] libmachine: (functional-596385) DBG | domain functional-596385 has defined MAC address 52:54:00:dd:a2:e7 in network mk-functional-596385
I0404 21:43:47.095732   21065 main.go:141] libmachine: (functional-596385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a2:e7", ip: ""} in network mk-functional-596385: {Iface:virbr1 ExpiryTime:2024-04-04 22:40:56 +0000 UTC Type:0 Mac:52:54:00:dd:a2:e7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:functional-596385 Clientid:01:52:54:00:dd:a2:e7}
I0404 21:43:47.095758   21065 main.go:141] libmachine: (functional-596385) DBG | domain functional-596385 has defined IP address 192.168.39.250 and MAC address 52:54:00:dd:a2:e7 in network mk-functional-596385
I0404 21:43:47.095911   21065 main.go:141] libmachine: (functional-596385) Calling .GetSSHPort
I0404 21:43:47.096078   21065 main.go:141] libmachine: (functional-596385) Calling .GetSSHKeyPath
I0404 21:43:47.096243   21065 main.go:141] libmachine: (functional-596385) Calling .GetSSHUsername
I0404 21:43:47.096401   21065 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/functional-596385/id_rsa Username:docker}
I0404 21:43:47.178274   21065 build_images.go:161] Building image from path: /tmp/build.1849722671.tar
I0404 21:43:47.178346   21065 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0404 21:43:47.193024   21065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1849722671.tar
I0404 21:43:47.198232   21065 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1849722671.tar: stat -c "%s %y" /var/lib/minikube/build/build.1849722671.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1849722671.tar': No such file or directory
I0404 21:43:47.198289   21065 ssh_runner.go:362] scp /tmp/build.1849722671.tar --> /var/lib/minikube/build/build.1849722671.tar (3072 bytes)
I0404 21:43:47.227279   21065 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1849722671
I0404 21:43:47.238941   21065 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1849722671 -xf /var/lib/minikube/build/build.1849722671.tar
I0404 21:43:47.252048   21065 crio.go:315] Building image: /var/lib/minikube/build/build.1849722671
I0404 21:43:47.252167   21065 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-596385 /var/lib/minikube/build/build.1849722671 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0404 21:43:50.330703   21065 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-596385 /var/lib/minikube/build/build.1849722671 --cgroup-manager=cgroupfs: (3.078502142s)
I0404 21:43:50.330818   21065 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1849722671
I0404 21:43:50.367295   21065 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1849722671.tar
I0404 21:43:50.404514   21065 build_images.go:217] Built localhost/my-image:functional-596385 from /tmp/build.1849722671.tar
I0404 21:43:50.404558   21065 build_images.go:133] succeeded building to: functional-596385
I0404 21:43:50.404565   21065 build_images.go:134] failed building to: 
I0404 21:43:50.404593   21065 main.go:141] libmachine: Making call to close driver server
I0404 21:43:50.404603   21065 main.go:141] libmachine: (functional-596385) Calling .Close
I0404 21:43:50.404856   21065 main.go:141] libmachine: Successfully made call to close driver server
I0404 21:43:50.404874   21065 main.go:141] libmachine: Making call to close connection to plugin binary
I0404 21:43:50.404888   21065 main.go:141] libmachine: (functional-596385) DBG | Closing plugin on server side
I0404 21:43:50.404933   21065 main.go:141] libmachine: Making call to close driver server
I0404 21:43:50.405020   21065 main.go:141] libmachine: (functional-596385) Calling .Close
I0404 21:43:50.405313   21065 main.go:141] libmachine: Successfully made call to close driver server
I0404 21:43:50.405327   21065 main.go:141] libmachine: Making call to close connection to plugin binary
I0404 21:43:50.405338   21065 main.go:141] libmachine: (functional-596385) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image ls
E0404 21:43:50.480271   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:43:50.486361   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:43:50.496668   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:43:50.516990   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:43:50.557306   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:43:50.637643   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:43:50.798204   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:43:51.118542   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:43:51.759372   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:43:53.040266   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:43:55.601429   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)
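
Note: the build log above shows how "image build" works against the crio runtime: the local context is tarred (/tmp/build.1849722671.tar), copied to /var/lib/minikube/build on the node, extracted, and built with "sudo podman build --cgroup-manager=cgroupfs". A minimal Go sketch of driving the same top-level command the test runs (profile and tag taken from this run; the real helpers live in minikube's test/integration package):

// image_build.go - invoke "minikube image build" the way the test above does.
// Illustrative sketch only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile := "functional-596385" // profile name from this run
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "build",
		"-t", "localhost/my-image:"+profile,
		"testdata/build", "--alsologtostderr")
	// minikube tars the context, copies it to the node, and runs podman build there
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Fprintln(os.Stderr, "image build failed:", err)
		os.Exit(1)
	}
}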

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.841887335s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-596385
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.86s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (8.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image load --daemon gcr.io/google-containers/addon-resizer:functional-596385 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 image load --daemon gcr.io/google-containers/addon-resizer:functional-596385 --alsologtostderr: (7.757350326s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (8.22s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (23.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-596385 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-596385 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-2xp75" [ed5c114b-cd13-4238-92c5-66eac5ceb156] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-2xp75" [ed5c114b-cd13-4238-92c5-66eac5ceb156] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.004614103s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image load --daemon gcr.io/google-containers/addon-resizer:functional-596385 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 image load --daemon gcr.io/google-containers/addon-resizer:functional-596385 --alsologtostderr: (5.35136315s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.883544886s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-596385
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image load --daemon gcr.io/google-containers/addon-resizer:functional-596385 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 image load --daemon gcr.io/google-containers/addon-resizer:functional-596385 --alsologtostderr: (5.242629211s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image save gcr.io/google-containers/addon-resizer:functional-596385 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 image save gcr.io/google-containers/addon-resizer:functional-596385 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.721596255s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 service list -o json
functional_test.go:1490: Took "384.351406ms" to run "out/minikube-linux-amd64 -p functional-596385 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.250:30156
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.250:30156
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
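
Note: the ServiceCmd tests above walk the full NodePort path: create a deployment from registry.k8s.io/echoserver:1.8, expose it on port 8080 as a NodePort service, then resolve the endpoint with "minikube service hello-node --url" (here http://192.168.39.250:30156). A minimal Go sketch, reusing the same profile and service name, that resolves the URL and probes it once; it is not part of the test suite:

// service_url.go - resolve a NodePort service URL via minikube and probe it.
// Illustrative sketch; profile and service name are taken from the run above.
package main

import (
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-596385"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "service --url failed:", err)
		os.Exit(1)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.250:30156
	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}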

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "418.493689ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "58.351334ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "406.812336ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "66.253767ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.29299733s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.56s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-596385 /tmp/TestFunctionalparallelMountCmdany-port3893114819/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1712267016787581534" to /tmp/TestFunctionalparallelMountCmdany-port3893114819/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1712267016787581534" to /tmp/TestFunctionalparallelMountCmdany-port3893114819/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1712267016787581534" to /tmp/TestFunctionalparallelMountCmdany-port3893114819/001/test-1712267016787581534
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596385 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.800617ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  4 21:43 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  4 21:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  4 21:43 test-1712267016787581534
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh cat /mount-9p/test-1712267016787581534
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-596385 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d75de6b1-b6f0-488e-be61-03c2d16be647] Pending
helpers_test.go:344: "busybox-mount" [d75de6b1-b6f0-488e-be61-03c2d16be647] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d75de6b1-b6f0-488e-be61-03c2d16be647] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d75de6b1-b6f0-488e-be61-03c2d16be647] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003608534s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-596385 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596385 /tmp/TestFunctionalparallelMountCmdany-port3893114819/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.97s)
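
Note: the first "findmnt -T /mount-9p" above exits non-zero, most likely because the 9p mount has not finished coming up after the mount daemon starts; the test simply retries until the mount appears. A minimal Go sketch of the same poll-until-mounted check (hypothetical helper, reusing the profile and mount point from this run):

// wait_for_mount.go - poll the guest until a 9p mount shows up, as the test above does.
// Illustrative sketch; profile and mount point are taken from the run above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-596385"
	mountPoint := "/mount-9p"
	for attempt := 1; attempt <= 10; attempt++ {
		// "minikube ssh" runs the command inside the guest VM
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T "+mountPoint+" | grep 9p")
		if err := cmd.Run(); err == nil {
			fmt.Println(mountPoint, "is mounted")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Fprintln(os.Stderr, mountPoint, "never appeared")
	os.Exit(1)
}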

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-596385
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 image save --daemon gcr.io/google-containers/addon-resizer:functional-596385 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-596385 image save --daemon gcr.io/google-containers/addon-resizer:functional-596385 --alsologtostderr: (1.553543011s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-596385
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-596385 /tmp/TestFunctionalparallelMountCmdspecific-port66414254/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596385 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.243886ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596385 /tmp/TestFunctionalparallelMountCmdspecific-port66414254/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596385 ssh "sudo umount -f /mount-9p": exit status 1 (214.022814ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-596385 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596385 /tmp/TestFunctionalparallelMountCmdspecific-port66414254/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)
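
For anyone reproducing the 9p probe above by hand, here is a minimal sketch (not part of the minikube test suite). It shells out to the same "minikube ssh -- findmnt -T" call the test retries and looks for a 9p entry, which is what the "| grep 9p" pipe in the log accomplishes. The plain "minikube" binary name, the profile, and the mount point are placeholders taken from the log.

// mountcheck.go - minimal sketch of the 9p probe used above: ask findmnt
// inside the guest what backs the mount point and look for a 9p entry.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func mounted9p(profile, mountPoint string) (bool, error) {
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--",
		"findmnt", "-T", mountPoint).CombinedOutput()
	if err != nil {
		return false, err
	}
	// findmnt -T reports whichever filesystem contains the path, so checking
	// for "9p" is what distinguishes a real host mount from the guest's own
	// root filesystem.
	return strings.Contains(string(out), "9p"), nil
}

func main() {
	ok, err := mounted9p("functional-596385", "/mount-9p")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("9p mount present:", ok)
}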

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-596385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1060739035/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-596385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1060739035/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-596385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1060739035/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596385 ssh "findmnt -T" /mount1: exit status 1 (232.875202ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-596385 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-596385 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1060739035/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1060739035/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1060739035/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.18s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-596385
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-596385
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-596385
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (232.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-454952 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0404 21:44:10.963276   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:44:31.443694   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:45:12.404903   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 21:46:34.325961   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-454952 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m51.659966956s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (232.35s)
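
The start/status sequence above can be mirrored with a short sketch like the following, assuming a plain "minikube" binary and the profile name from the log. The flags are the same ones ha_test.go:101 passes: --ha provisions a multi-control-plane cluster and --wait=true blocks until core components report healthy.

// ha_start.go - minimal sketch of the HA start/status flow exercised above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := run("start", "-p", "ha-454952", "--ha", "--wait=true",
		"--memory=2200", "--driver=kvm2", "--container-runtime=crio"); err != nil {
		fmt.Fprintln(os.Stderr, "start failed:", err)
		os.Exit(1)
	}
	// Equivalent of ha_test.go:107: confirm every node reports a healthy status.
	if err := run("-p", "ha-454952", "status"); err != nil {
		fmt.Fprintln(os.Stderr, "status reported a problem:", err)
		os.Exit(1)
	}
}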

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-454952 -- rollout status deployment/busybox: (4.524887775s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-8qf48 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-q56fw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-rshl2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-8qf48 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-q56fw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-rshl2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-8qf48 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-q56fw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-rshl2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.00s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-8qf48 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-8qf48 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-q56fw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-q56fw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-rshl2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-454952 -- exec busybox-7fdf7869d9-rshl2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)
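
The shell pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) relies on the fixed shape of busybox nslookup output, where the resolved address lands on a known line. The sketch below does the same host-reachability check a little more defensively: resolve from inside a pod, take the last "Address" line (skipping the DNS server address printed first), then ping the result once. Context and pod names are copied from the log; the output-parsing assumption about busybox nslookup is mine.

// hostip.go - minimal sketch of the host-reachability check above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostIPFromNslookup returns the address on the last "Address" line of
// busybox nslookup output, i.e. the resolved answer rather than the server.
func hostIPFromNslookup(out string) string {
	ip := ""
	for _, line := range strings.Split(out, "\n") {
		if strings.HasPrefix(line, "Address") {
			if fields := strings.Fields(line); len(fields) >= 2 {
				ip = fields[len(fields)-1]
			}
		}
	}
	return ip
}

func main() {
	pod := "busybox-7fdf7869d9-8qf48" // pod name taken from the log above
	out, err := exec.Command("kubectl", "--context", "ha-454952", "exec", pod,
		"--", "nslookup", "host.minikube.internal").Output()
	if err != nil {
		fmt.Println("nslookup failed:", err)
		return
	}
	ip := hostIPFromNslookup(string(out))
	// ping -c 1 <ip> from inside the pod, as ha_test.go:218 does.
	if err := exec.Command("kubectl", "--context", "ha-454952", "exec", pod,
		"--", "ping", "-c", "1", ip).Run(); err != nil {
		fmt.Println("host", ip, "unreachable:", err)
		return
	}
	fmt.Println("host", ip, "reachable")
}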

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (46.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-454952 -v=7 --alsologtostderr
E0404 21:48:09.142480   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:09.147808   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:09.158107   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:09.178398   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:09.218749   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:09.299117   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:09.459589   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:09.780181   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:10.421286   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:11.702025   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:14.262305   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:19.383176   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 21:48:29.624246   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-454952 -v=7 --alsologtostderr: (45.884156854s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.76s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-454952 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
E0404 21:48:50.104460   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0404 21:48:50.480646   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp testdata/cp-test.txt ha-454952:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3335929805/001/cp-test_ha-454952.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952:/home/docker/cp-test.txt ha-454952-m02:/home/docker/cp-test_ha-454952_ha-454952-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m02 "sudo cat /home/docker/cp-test_ha-454952_ha-454952-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952:/home/docker/cp-test.txt ha-454952-m03:/home/docker/cp-test_ha-454952_ha-454952-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m03 "sudo cat /home/docker/cp-test_ha-454952_ha-454952-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952:/home/docker/cp-test.txt ha-454952-m04:/home/docker/cp-test_ha-454952_ha-454952-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m04 "sudo cat /home/docker/cp-test_ha-454952_ha-454952-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp testdata/cp-test.txt ha-454952-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3335929805/001/cp-test_ha-454952-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m02:/home/docker/cp-test.txt ha-454952:/home/docker/cp-test_ha-454952-m02_ha-454952.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952 "sudo cat /home/docker/cp-test_ha-454952-m02_ha-454952.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m02:/home/docker/cp-test.txt ha-454952-m03:/home/docker/cp-test_ha-454952-m02_ha-454952-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m03 "sudo cat /home/docker/cp-test_ha-454952-m02_ha-454952-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m02:/home/docker/cp-test.txt ha-454952-m04:/home/docker/cp-test_ha-454952-m02_ha-454952-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m04 "sudo cat /home/docker/cp-test_ha-454952-m02_ha-454952-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp testdata/cp-test.txt ha-454952-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3335929805/001/cp-test_ha-454952-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt ha-454952:/home/docker/cp-test_ha-454952-m03_ha-454952.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952 "sudo cat /home/docker/cp-test_ha-454952-m03_ha-454952.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt ha-454952-m02:/home/docker/cp-test_ha-454952-m03_ha-454952-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m02 "sudo cat /home/docker/cp-test_ha-454952-m03_ha-454952-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m03:/home/docker/cp-test.txt ha-454952-m04:/home/docker/cp-test_ha-454952-m03_ha-454952-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m04 "sudo cat /home/docker/cp-test_ha-454952-m03_ha-454952-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp testdata/cp-test.txt ha-454952-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3335929805/001/cp-test_ha-454952-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt ha-454952:/home/docker/cp-test_ha-454952-m04_ha-454952.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952 "sudo cat /home/docker/cp-test_ha-454952-m04_ha-454952.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt ha-454952-m02:/home/docker/cp-test_ha-454952-m04_ha-454952-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m02 "sudo cat /home/docker/cp-test_ha-454952-m04_ha-454952-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 cp ha-454952-m04:/home/docker/cp-test.txt ha-454952-m03:/home/docker/cp-test_ha-454952-m04_ha-454952-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 ssh -n ha-454952-m03 "sudo cat /home/docker/cp-test_ha-454952-m04_ha-454952-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.83s)
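
The copy-and-verify pattern above (minikube cp followed by ssh "sudo cat") can be reproduced with a sketch like this one. Profile, node, and paths are taken from the log as placeholders; the "minikube" binary name is an assumption.

// cp_roundtrip.go - minimal sketch of the copy-and-verify pattern above:
// push a local file to a node with "minikube cp", read it back over ssh,
// and confirm the contents match.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const (
		profile = "ha-454952"
		node    = "ha-454952-m02"
		local   = "testdata/cp-test.txt"
		remote  = "/home/docker/cp-test.txt"
	)
	want, err := os.ReadFile(local)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// minikube cp accepts <node>:<path> targets, as in the log above.
	if err := exec.Command("minikube", "-p", profile, "cp", local,
		node+":"+remote).Run(); err != nil {
		fmt.Fprintln(os.Stderr, "cp failed:", err)
		os.Exit(1)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat "+remote).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "ssh cat failed:", err)
		os.Exit(1)
	}
	if string(got) != string(want) {
		fmt.Fprintln(os.Stderr, "contents differ after copy")
		os.Exit(1)
	}
	fmt.Println("copy verified")
}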

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.517134696s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.52s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-454952 node delete m03 -v=7 --alsologtostderr: (16.594619495s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.38s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (343.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-454952 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0404 22:03:09.144241   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 22:03:50.479897   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 22:04:32.186801   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-454952 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m43.129102863s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (343.94s)
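
The post-restart readiness probe at ha_test.go:592 above uses a go-template to print each node's Ready condition. The sketch below runs essentially the same template (whitespace trimmed) and fails if any node is not "True"; kubectl is assumed to already point at the restarted cluster.

// ready_check.go - minimal sketch of the readiness probe above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl failed:", err)
		os.Exit(1)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) != "True" {
			fmt.Fprintln(os.Stderr, "a node is not Ready:", line)
			os.Exit(1)
		}
	}
	fmt.Println("all nodes Ready")
}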

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-454952 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-454952 --control-plane -v=7 --alsologtostderr: (1m14.420069085s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-454952 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.29s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

                                                
                                    
TestJSONOutput/start/Command (98.21s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-407980 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0404 22:08:09.143025   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 22:08:50.480407   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-407980 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.212956298s)
--- PASS: TestJSONOutput/start/Command (98.21s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-407980 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-407980 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.42s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-407980 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-407980 --output=json --user=testUser: (7.419030575s)
--- PASS: TestJSONOutput/stop/Command (7.42s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-931822 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-931822 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.232846ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"63174f24-25f2-473b-8e43-e7949fb3e715","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-931822] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae4d0f1c-4b77-4d0f-a070-6d3b202fc1e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16143"}}
	{"specversion":"1.0","id":"a22cc9b2-c5be-4052-89ab-fa982968e97c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5ebbdd10-7570-4a77-a4e8-b073c9d1a27e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig"}}
	{"specversion":"1.0","id":"90196659-d439-4a0b-8101-874a23c581d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube"}}
	{"specversion":"1.0","id":"fd1ca4a6-3ed0-4a2b-ad63-5ad8219ae912","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2b191d9a-f26c-4e93-a8ba-e28baa355f44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7434a80c-0035-42ce-9f60-a8f1c5779427","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-931822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-931822
--- PASS: TestErrorJSONOutput (0.23s)
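
Each line minikube emits under --output=json, as shown in the stdout block above, is a CloudEvents-style JSON object whose "type" distinguishes steps, info messages, and errors. The sketch below parses such a stream; the sample line is copied verbatim from the error event in the log, and only the fields used here are modelled.

// events.go - minimal sketch for consuming the --output=json event stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sample := `{"specversion":"1.0","id":"7434a80c-0035-42ce-9f60-a8f1c5779427","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not a JSON event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		default:
			fmt.Println(ev.Data["message"])
		}
	}
}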

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (89.96s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-169283 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-169283 --driver=kvm2  --container-runtime=crio: (43.573342147s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-174313 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-174313 --driver=kvm2  --container-runtime=crio: (43.690929986s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-169283
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-174313
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-174313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-174313
helpers_test.go:175: Cleaning up "first-169283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-169283
--- PASS: TestMinikubeProfile (89.96s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.97s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-869086 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-869086 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.96755478s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.97s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-869086 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-869086 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.66s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-885030 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-885030 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.660806688s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.66s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885030 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885030 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-869086 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.49s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885030 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885030 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.49s)

                                                
                                    
TestMountStart/serial/Stop (1.51s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-885030
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-885030: (1.511246415s)
--- PASS: TestMountStart/serial/Stop (1.51s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.65s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-885030
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-885030: (23.653090605s)
--- PASS: TestMountStart/serial/RestartStopped (24.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885030 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885030 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-575162 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0404 22:13:09.142388   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
E0404 22:13:50.479924   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-575162 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m49.063813022s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.50s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-575162 -- rollout status deployment/busybox: (5.218432338s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- exec busybox-7fdf7869d9-dlm6j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- exec busybox-7fdf7869d9-t8948 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- exec busybox-7fdf7869d9-dlm6j -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- exec busybox-7fdf7869d9-t8948 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- exec busybox-7fdf7869d9-dlm6j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- exec busybox-7fdf7869d9-t8948 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.89s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- exec busybox-7fdf7869d9-dlm6j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- exec busybox-7fdf7869d9-dlm6j -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- exec busybox-7fdf7869d9-t8948 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575162 -- exec busybox-7fdf7869d9-t8948 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                    
TestMultiNode/serial/AddNode (41.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-575162 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-575162 -v 3 --alsologtostderr: (41.158965534s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.74s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-575162 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp testdata/cp-test.txt multinode-575162:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp multinode-575162:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2792670073/001/cp-test_multinode-575162.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp multinode-575162:/home/docker/cp-test.txt multinode-575162-m02:/home/docker/cp-test_multinode-575162_multinode-575162-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m02 "sudo cat /home/docker/cp-test_multinode-575162_multinode-575162-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp multinode-575162:/home/docker/cp-test.txt multinode-575162-m03:/home/docker/cp-test_multinode-575162_multinode-575162-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m03 "sudo cat /home/docker/cp-test_multinode-575162_multinode-575162-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp testdata/cp-test.txt multinode-575162-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp multinode-575162-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2792670073/001/cp-test_multinode-575162-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp multinode-575162-m02:/home/docker/cp-test.txt multinode-575162:/home/docker/cp-test_multinode-575162-m02_multinode-575162.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162 "sudo cat /home/docker/cp-test_multinode-575162-m02_multinode-575162.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp multinode-575162-m02:/home/docker/cp-test.txt multinode-575162-m03:/home/docker/cp-test_multinode-575162-m02_multinode-575162-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m03 "sudo cat /home/docker/cp-test_multinode-575162-m02_multinode-575162-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp testdata/cp-test.txt multinode-575162-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp multinode-575162-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2792670073/001/cp-test_multinode-575162-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp multinode-575162-m03:/home/docker/cp-test.txt multinode-575162:/home/docker/cp-test_multinode-575162-m03_multinode-575162.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162 "sudo cat /home/docker/cp-test_multinode-575162-m03_multinode-575162.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 cp multinode-575162-m03:/home/docker/cp-test.txt multinode-575162-m02:/home/docker/cp-test_multinode-575162-m03_multinode-575162-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162-m02 "sudo cat /home/docker/cp-test_multinode-575162-m03_multinode-575162-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.61s)
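
The copy sequence above round-trips a test file between the host and every node; a minimal sketch of the pattern it exercises (profile, node names and paths are the ones used in this run, and the host destination is an arbitrary local path):

    # host -> node, then read the file back over ssh
    out/minikube-linux-amd64 -p multinode-575162 cp testdata/cp-test.txt multinode-575162:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-575162 ssh -n multinode-575162 "sudo cat /home/docker/cp-test.txt"
    # node -> host (any writable host path works)
    out/minikube-linux-amd64 -p multinode-575162 cp multinode-575162:/home/docker/cp-test.txt /tmp/cp-test_multinode-575162.txt
    # node -> node
    out/minikube-linux-amd64 -p multinode-575162 cp multinode-575162:/home/docker/cp-test.txt multinode-575162-m02:/home/docker/cp-test.txt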

                                                
                                    
x
+
TestMultiNode/serial/StopNode (3.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-575162 node stop m03: (2.295973391s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-575162 status: exit status 7 (441.961707ms)

                                                
                                                
-- stdout --
	multinode-575162
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-575162-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-575162-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-575162 status --alsologtostderr: exit status 7 (439.808616ms)

                                                
                                                
-- stdout --
	multinode-575162
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-575162-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-575162-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 22:15:43.764764   36642 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:15:43.764888   36642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:15:43.764897   36642 out.go:304] Setting ErrFile to fd 2...
	I0404 22:15:43.764902   36642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:15:43.765096   36642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:15:43.765291   36642 out.go:298] Setting JSON to false
	I0404 22:15:43.765316   36642 mustload.go:65] Loading cluster: multinode-575162
	I0404 22:15:43.765488   36642 notify.go:220] Checking for updates...
	I0404 22:15:43.765720   36642 config.go:182] Loaded profile config "multinode-575162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:15:43.765737   36642 status.go:255] checking status of multinode-575162 ...
	I0404 22:15:43.766148   36642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:15:43.766205   36642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:15:43.783298   36642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37721
	I0404 22:15:43.783805   36642 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:15:43.784382   36642 main.go:141] libmachine: Using API Version  1
	I0404 22:15:43.784405   36642 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:15:43.784773   36642 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:15:43.784958   36642 main.go:141] libmachine: (multinode-575162) Calling .GetState
	I0404 22:15:43.786796   36642 status.go:330] multinode-575162 host status = "Running" (err=<nil>)
	I0404 22:15:43.786811   36642 host.go:66] Checking if "multinode-575162" exists ...
	I0404 22:15:43.787089   36642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:15:43.787125   36642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:15:43.802595   36642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0404 22:15:43.803036   36642 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:15:43.803526   36642 main.go:141] libmachine: Using API Version  1
	I0404 22:15:43.803555   36642 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:15:43.803869   36642 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:15:43.804144   36642 main.go:141] libmachine: (multinode-575162) Calling .GetIP
	I0404 22:15:43.807445   36642 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:15:43.807960   36642 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:15:43.807991   36642 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:15:43.808190   36642 host.go:66] Checking if "multinode-575162" exists ...
	I0404 22:15:43.808477   36642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:15:43.808518   36642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:15:43.823493   36642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46607
	I0404 22:15:43.823902   36642 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:15:43.824409   36642 main.go:141] libmachine: Using API Version  1
	I0404 22:15:43.824429   36642 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:15:43.824744   36642 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:15:43.825033   36642 main.go:141] libmachine: (multinode-575162) Calling .DriverName
	I0404 22:15:43.825254   36642 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 22:15:43.825291   36642 main.go:141] libmachine: (multinode-575162) Calling .GetSSHHostname
	I0404 22:15:43.827934   36642 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:15:43.828392   36642 main.go:141] libmachine: (multinode-575162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:cc:4f", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:13:08 +0000 UTC Type:0 Mac:52:54:00:d0:cc:4f Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-575162 Clientid:01:52:54:00:d0:cc:4f}
	I0404 22:15:43.828458   36642 main.go:141] libmachine: (multinode-575162) DBG | domain multinode-575162 has defined IP address 192.168.39.203 and MAC address 52:54:00:d0:cc:4f in network mk-multinode-575162
	I0404 22:15:43.828513   36642 main.go:141] libmachine: (multinode-575162) Calling .GetSSHPort
	I0404 22:15:43.828663   36642 main.go:141] libmachine: (multinode-575162) Calling .GetSSHKeyPath
	I0404 22:15:43.828819   36642 main.go:141] libmachine: (multinode-575162) Calling .GetSSHUsername
	I0404 22:15:43.828960   36642 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/multinode-575162/id_rsa Username:docker}
	I0404 22:15:43.912304   36642 ssh_runner.go:195] Run: systemctl --version
	I0404 22:15:43.919737   36642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:15:43.936051   36642 kubeconfig.go:125] found "multinode-575162" server: "https://192.168.39.203:8443"
	I0404 22:15:43.936094   36642 api_server.go:166] Checking apiserver status ...
	I0404 22:15:43.936157   36642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0404 22:15:43.953516   36642 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1128/cgroup
	W0404 22:15:43.965968   36642 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1128/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0404 22:15:43.966019   36642 ssh_runner.go:195] Run: ls
	I0404 22:15:43.971126   36642 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0404 22:15:43.975601   36642 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0404 22:15:43.975629   36642 status.go:422] multinode-575162 apiserver status = Running (err=<nil>)
	I0404 22:15:43.975639   36642 status.go:257] multinode-575162 status: &{Name:multinode-575162 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0404 22:15:43.975662   36642 status.go:255] checking status of multinode-575162-m02 ...
	I0404 22:15:43.975948   36642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:15:43.975979   36642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:15:43.991066   36642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0404 22:15:43.991567   36642 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:15:43.992030   36642 main.go:141] libmachine: Using API Version  1
	I0404 22:15:43.992054   36642 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:15:43.992411   36642 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:15:43.992604   36642 main.go:141] libmachine: (multinode-575162-m02) Calling .GetState
	I0404 22:15:43.994137   36642 status.go:330] multinode-575162-m02 host status = "Running" (err=<nil>)
	I0404 22:15:43.994152   36642 host.go:66] Checking if "multinode-575162-m02" exists ...
	I0404 22:15:43.994460   36642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:15:43.994502   36642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:15:44.009675   36642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I0404 22:15:44.010067   36642 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:15:44.010552   36642 main.go:141] libmachine: Using API Version  1
	I0404 22:15:44.010573   36642 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:15:44.010909   36642 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:15:44.011105   36642 main.go:141] libmachine: (multinode-575162-m02) Calling .GetIP
	I0404 22:15:44.014179   36642 main.go:141] libmachine: (multinode-575162-m02) DBG | domain multinode-575162-m02 has defined MAC address 52:54:00:85:1e:1e in network mk-multinode-575162
	I0404 22:15:44.014623   36642 main.go:141] libmachine: (multinode-575162-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:1e:1e", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:14:18 +0000 UTC Type:0 Mac:52:54:00:85:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-575162-m02 Clientid:01:52:54:00:85:1e:1e}
	I0404 22:15:44.014644   36642 main.go:141] libmachine: (multinode-575162-m02) DBG | domain multinode-575162-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:85:1e:1e in network mk-multinode-575162
	I0404 22:15:44.014793   36642 host.go:66] Checking if "multinode-575162-m02" exists ...
	I0404 22:15:44.015071   36642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:15:44.015109   36642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:15:44.031149   36642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
	I0404 22:15:44.031584   36642 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:15:44.031992   36642 main.go:141] libmachine: Using API Version  1
	I0404 22:15:44.032020   36642 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:15:44.032368   36642 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:15:44.032541   36642 main.go:141] libmachine: (multinode-575162-m02) Calling .DriverName
	I0404 22:15:44.032721   36642 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0404 22:15:44.032744   36642 main.go:141] libmachine: (multinode-575162-m02) Calling .GetSSHHostname
	I0404 22:15:44.035296   36642 main.go:141] libmachine: (multinode-575162-m02) DBG | domain multinode-575162-m02 has defined MAC address 52:54:00:85:1e:1e in network mk-multinode-575162
	I0404 22:15:44.035705   36642 main.go:141] libmachine: (multinode-575162-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:1e:1e", ip: ""} in network mk-multinode-575162: {Iface:virbr1 ExpiryTime:2024-04-04 23:14:18 +0000 UTC Type:0 Mac:52:54:00:85:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-575162-m02 Clientid:01:52:54:00:85:1e:1e}
	I0404 22:15:44.035733   36642 main.go:141] libmachine: (multinode-575162-m02) DBG | domain multinode-575162-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:85:1e:1e in network mk-multinode-575162
	I0404 22:15:44.035871   36642 main.go:141] libmachine: (multinode-575162-m02) Calling .GetSSHPort
	I0404 22:15:44.036046   36642 main.go:141] libmachine: (multinode-575162-m02) Calling .GetSSHKeyPath
	I0404 22:15:44.036229   36642 main.go:141] libmachine: (multinode-575162-m02) Calling .GetSSHUsername
	I0404 22:15:44.036396   36642 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16143-5297/.minikube/machines/multinode-575162-m02/id_rsa Username:docker}
	I0404 22:15:44.115606   36642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0404 22:15:44.131417   36642 status.go:257] multinode-575162-m02 status: &{Name:multinode-575162-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0404 22:15:44.131453   36642 status.go:255] checking status of multinode-575162-m03 ...
	I0404 22:15:44.131748   36642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0404 22:15:44.131785   36642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0404 22:15:44.147172   36642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0404 22:15:44.147568   36642 main.go:141] libmachine: () Calling .GetVersion
	I0404 22:15:44.148089   36642 main.go:141] libmachine: Using API Version  1
	I0404 22:15:44.148114   36642 main.go:141] libmachine: () Calling .SetConfigRaw
	I0404 22:15:44.148445   36642 main.go:141] libmachine: () Calling .GetMachineName
	I0404 22:15:44.148641   36642 main.go:141] libmachine: (multinode-575162-m03) Calling .GetState
	I0404 22:15:44.150367   36642 status.go:330] multinode-575162-m03 host status = "Stopped" (err=<nil>)
	I0404 22:15:44.150380   36642 status.go:343] host is not running, skipping remaining checks
	I0404 22:15:44.150386   36642 status.go:257] multinode-575162-m03 status: &{Name:multinode-575162-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.18s)
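
Note that the non-zero exits above are the outcome the test expects, not failures: once m03 is stopped, status reports that node as Stopped and exits with status 7. A minimal sketch of the same check:

    out/minikube-linux-amd64 -p multinode-575162 node stop m03
    out/minikube-linux-amd64 -p multinode-575162 status    # exits 7 while any node is stopped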

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (29.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-575162 node start m03 -v=7 --alsologtostderr: (29.1323974s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.77s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-575162 node delete m03: (1.867204538s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.42s)
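
The readiness verification above uses a kubectl go-template; a minimal sketch of the same query against the current context:

    # print the Ready condition status (True/False) of every remaining node, one per line
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'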

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (180.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-575162 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-575162 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.093378763s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575162 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (180.66s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (47.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-575162
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-575162-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-575162-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.564401ms)

                                                
                                                
-- stdout --
	* [multinode-575162-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-575162-m02' is duplicated with machine name 'multinode-575162-m02' in profile 'multinode-575162'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-575162-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-575162-m03 --driver=kvm2  --container-runtime=crio: (46.668624821s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-575162
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-575162: exit status 80 (236.850333ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-575162 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-575162-m03 already exists in multinode-575162-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-575162-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.82s)
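
Both rejections above are the behaviour under test: a new profile may not reuse a machine name that already exists inside another profile (exit status 14, MK_USAGE), and node add refuses to create a node whose name would collide with an existing standalone profile (exit status 80, GUEST_NODE_ADD). A minimal sketch of the first conflicting call (profile names are the ones from this run):

    # rejected: multinode-575162-m02 is already a machine in profile multinode-575162
    out/minikube-linux-amd64 start -p multinode-575162-m02 --driver=kvm2 --container-runtime=crio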

                                                
                                    
x
+
TestScheduledStopUnix (116.32s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-889441 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-889441 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.58115334s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-889441 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-889441 -n scheduled-stop-889441
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-889441 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-889441 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-889441 -n scheduled-stop-889441
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-889441
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-889441 --schedule 15s
E0404 22:33:09.146156   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0404 22:33:33.533260   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-889441
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-889441: exit status 7 (73.856717ms)

                                                
                                                
-- stdout --
	scheduled-stop-889441
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-889441 -n scheduled-stop-889441
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-889441 -n scheduled-stop-889441: exit status 7 (74.999486ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-889441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-889441
--- PASS: TestScheduledStopUnix (116.32s)
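
The scheduled-stop flow above can be driven by hand with the same flags; a minimal sketch (the profile name is specific to this run):

    # schedule a stop five minutes out, then cancel it
    out/minikube-linux-amd64 stop -p scheduled-stop-889441 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-889441 --cancel-scheduled
    # schedule a short stop and verify the host eventually reports Stopped (status exits 7)
    out/minikube-linux-amd64 stop -p scheduled-stop-889441 --schedule 15s
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-889441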

                                                
                                    
x
+
TestRunningBinaryUpgrade (144.84s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.791885179 start -p running-upgrade-590730 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.791885179 start -p running-upgrade-590730 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m17.430698043s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-590730 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0404 22:38:09.143183   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-590730 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.281990098s)
helpers_test.go:175: Cleaning up "running-upgrade-590730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-590730
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-590730: (1.169543694s)
--- PASS: TestRunningBinaryUpgrade (144.84s)
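
The upgrade path above creates a cluster with an old release binary and then, while it is still running, restarts the same profile with the binary under test; a minimal sketch (the /tmp binary is the temporary copy of v1.26.0 fetched for this run):

    /tmp/minikube-v1.26.0.791885179 start -p running-upgrade-590730 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-590730 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio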

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-063570 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-063570 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (114.542216ms)

                                                
                                                
-- stdout --
	* [false-063570] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0404 22:33:49.487252   43793 out.go:291] Setting OutFile to fd 1 ...
	I0404 22:33:49.487397   43793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:33:49.487409   43793 out.go:304] Setting ErrFile to fd 2...
	I0404 22:33:49.487416   43793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0404 22:33:49.487588   43793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16143-5297/.minikube/bin
	I0404 22:33:49.488172   43793 out.go:298] Setting JSON to false
	I0404 22:33:49.489147   43793 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4575,"bootTime":1712265455,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0404 22:33:49.489212   43793 start.go:139] virtualization: kvm guest
	I0404 22:33:49.491749   43793 out.go:177] * [false-063570] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0404 22:33:49.493278   43793 out.go:177]   - MINIKUBE_LOCATION=16143
	I0404 22:33:49.493356   43793 notify.go:220] Checking for updates...
	I0404 22:33:49.494867   43793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0404 22:33:49.496423   43793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	I0404 22:33:49.498242   43793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	I0404 22:33:49.499661   43793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0404 22:33:49.501219   43793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0404 22:33:49.503030   43793 config.go:182] Loaded profile config "force-systemd-flag-048599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:33:49.503131   43793 config.go:182] Loaded profile config "kubernetes-upgrade-013199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0404 22:33:49.503215   43793 config.go:182] Loaded profile config "offline-crio-035370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0404 22:33:49.503290   43793 driver.go:392] Setting default libvirt URI to qemu:///system
	I0404 22:33:49.538846   43793 out.go:177] * Using the kvm2 driver based on user configuration
	I0404 22:33:49.540452   43793 start.go:297] selected driver: kvm2
	I0404 22:33:49.540472   43793 start.go:901] validating driver "kvm2" against <nil>
	I0404 22:33:49.540483   43793 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0404 22:33:49.542871   43793 out.go:177] 
	W0404 22:33:49.544112   43793 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0404 22:33:49.545632   43793 out.go:177] 

                                                
                                                
** /stderr **
E0404 22:33:50.480004   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
net_test.go:88: 
----------------------- debugLogs start: false-063570 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-063570" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-063570" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-063570

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-063570"

                                                
                                                
----------------------- debugLogs end: false-063570 [took: 3.673870612s] --------------------------------
helpers_test.go:175: Cleaning up "false-063570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-063570
--- PASS: TestNetworkPlugins/group/false (3.95s)
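
Exit status 14 above is the expected rejection: the crio runtime requires a CNI, so --cni=false is refused before any VM is created, and the debugLogs dump that follows only reports a missing context/profile because no cluster ever existed. A minimal sketch of the rejected invocation:

    # rejected with MK_USAGE: the "crio" container runtime requires CNI
    out/minikube-linux-amd64 start -p false-063570 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio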

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (126.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.582502455 start -p stopped-upgrade-654429 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.582502455 start -p stopped-upgrade-654429 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m16.885122767s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.582502455 -p stopped-upgrade-654429 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.582502455 -p stopped-upgrade-654429 stop: (2.145796457s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-654429 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0404 22:37:52.188292   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-654429 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.565079906s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (126.60s)
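
This variant stops the cluster created by the old binary before upgrading it with the binary under test; a minimal sketch of the sequence (the /tmp binary is the temporary v1.26.0 copy fetched for this run):

    /tmp/minikube-v1.26.0.582502455 start -p stopped-upgrade-654429 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.582502455 -p stopped-upgrade-654429 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-654429 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio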

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-654429
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-654429: (1.113762558s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

                                                
                                    
x
+
TestPause/serial/Start (77.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-661005 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-661005 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m17.082206654s)
--- PASS: TestPause/serial/Start (77.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-450559 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-450559 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (82.713331ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-450559] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-5297/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-5297/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
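
This test only asserts the usage error: --no-kubernetes and --kubernetes-version are mutually exclusive, so minikube exits with status 14 (MK_USAGE) before doing any work. A minimal sketch of the rejected command and the fix the error message suggests:

  # rejected with exit status 14; the two flags conflict
  out/minikube-linux-amd64 start -p NoKubernetes-450559 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
  # remedy quoted from the error output: drop the globally configured version
  minikube config unset kubernetes-version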

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (68.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-450559 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-450559 --driver=kvm2  --container-runtime=crio: (1m7.883793038s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-450559 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (68.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (121.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m1.346386605s)
--- PASS: TestNetworkPlugins/group/auto/Start (121.35s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (69.3s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-661005 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-661005 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.276350009s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (69.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (101.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m41.460379274s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.46s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (46.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-450559 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-450559 --no-kubernetes --driver=kvm2  --container-runtime=crio: (45.561967332s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-450559 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-450559 status -o json: exit status 2 (274.00809ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-450559","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-450559
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-450559: (1.028740461s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (46.87s)
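
The exit status 2 from "status -o json" is the expected result here: the JSON shows the host still running with the kubelet and API server stopped, and minikube status exits non-zero when components are not running. A small sketch of inspecting that state by hand (the jq line is illustrative only and not part of the test):

  # exits 2 in this state: Host "Running", Kubelet and APIServer "Stopped"
  out/minikube-linux-amd64 -p NoKubernetes-450559 status -o json
  # e.g. pull out a single field, assuming jq is available on the host
  out/minikube-linux-amd64 -p NoKubernetes-450559 status -o json | jq -r .Kubelet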

                                                
                                    
x
+
TestPause/serial/Pause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-661005 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-661005 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-661005 --output=json --layout=cluster: exit status 2 (284.635333ms)

                                                
                                                
-- stdout --
	{"Name":"pause-661005","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-661005","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-661005 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (2.22s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-661005 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-661005 --alsologtostderr -v=5: (2.223078347s)
--- PASS: TestPause/serial/PauseAgain (2.22s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.37s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-661005 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-661005 --alsologtostderr -v=5: (1.367613575s)
--- PASS: TestPause/serial/DeletePaused (1.37s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.53s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)
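
Taken together, the TestPause steps above walk one profile through the whole pause lifecycle. A condensed sketch of the same sequence, using the pause-661005 profile name from this run:

  out/minikube-linux-amd64 pause -p pause-661005 --alsologtostderr -v=5              # pause the running cluster
  out/minikube-linux-amd64 status -p pause-661005 --output=json --layout=cluster     # exits 2; StatusName reports "Paused"
  out/minikube-linux-amd64 unpause -p pause-661005 --alsologtostderr -v=5            # resume it
  out/minikube-linux-amd64 pause -p pause-661005 --alsologtostderr -v=5              # pausing again must still succeed
  out/minikube-linux-amd64 delete -p pause-661005 --alsologtostderr -v=5             # a paused cluster can be deleted
  out/minikube-linux-amd64 profile list --output json                                # confirm the profile is gone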

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (71.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m11.348812912s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (50.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-450559 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-450559 --no-kubernetes --driver=kvm2  --container-runtime=crio: (50.519532742s)
--- PASS: TestNoKubernetes/serial/Start (50.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-whr5s" [1a9b7d67-8fe8-41c9-93e7-c25bfe0134ec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.053893101s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-063570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-063570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2z6g2" [72d9494a-c379-42a7-9098-ba67156745b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2z6g2" [72d9494a-c379-42a7-9098-ba67156745b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005163342s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.59s)
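
Each NetCatPod step deploys the same netcat Deployment from testdata and waits for the pod to become Ready. A sketch of doing that manually against the auto-063570 context; the explicit "kubectl wait" is an illustration, since the test helper polls from Go instead:

  # (re)deploy the netcat test workload
  kubectl --context auto-063570 replace --force -f testdata/netcat-deployment.yaml
  # wait for the pod labelled app=netcat to report Ready (the test allows up to 15m)
  kubectl --context auto-063570 wait --for=condition=ready pod -l app=netcat --timeout=15m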

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-063570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-063570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pm67q" [3183fc0b-cd26-48f6-9c35-5bfb8318d2c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pm67q" [3183fc0b-cd26-48f6-9c35-5bfb8318d2c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005843016s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-063570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-063570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)
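
The DNS, Localhost and HairPin checks run the same three probes inside the netcat pod for every CNI group. As a sketch, against the kindnet-063570 context from this run:

  # in-cluster DNS must resolve the kubernetes service
  kubectl --context kindnet-063570 exec deployment/netcat -- nslookup kubernetes.default
  # the pod must reach a listener on its own localhost
  kubectl --context kindnet-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: the pod must reach itself back through its own "netcat" Service name
  kubectl --context kindnet-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"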

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-450559 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-450559 "sudo systemctl is-active --quiet service kubelet": exit status 1 (250.398576ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
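
The non-zero exit is the point of this check: with --no-kubernetes the kubelet unit should not be active inside the guest, so "systemctl is-active" fails (ssh reports status 3 in this run). Sketch of the probe:

  # expected to fail (non-zero exit) while Kubernetes is disabled on the profile
  out/minikube-linux-amd64 ssh -p NoKubernetes-450559 "sudo systemctl is-active --quiet service kubelet"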

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.62s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-450559
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-450559: (1.588642178s)
--- PASS: TestNoKubernetes/serial/Stop (1.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (26.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-450559 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-450559 --driver=kvm2  --container-runtime=crio: (26.70334276s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (26.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (108.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m48.017007464s)
--- PASS: TestNetworkPlugins/group/calico/Start (108.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (131.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m11.9857709s)
--- PASS: TestNetworkPlugins/group/flannel/Start (131.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-063570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-063570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w4pgf" [43990c08-2233-487d-8044-2ff9aa1acd49] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-w4pgf" [43990c08-2233-487d-8044-2ff9aa1acd49] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004389136s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-450559 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-450559 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.554141ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (136.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m16.618288187s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (136.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-063570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (167.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0404 22:43:09.142828   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/functional-596385/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m47.095343988s)
--- PASS: TestNetworkPlugins/group/bridge/Start (167.10s)
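
The network-plugin Start steps differ only in how the CNI is selected on the minikube command line. A condensed sketch of the variants exercised in this run, with profile names as used above:

  # baseline: CNI choice left to minikube (these flags are shared by every variant)
  out/minikube-linux-amd64 start -p auto-063570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 --container-runtime=crio
  # the other groups reuse the same command and add exactly one CNI selection flag:
  #   --cni=kindnet | --cni=calico | --cni=flannel | --cni=bridge    (built-in choices)
  #   --cni=testdata/kube-flannel.yaml                               (custom manifest, custom-flannel group)
  #   --enable-default-cni=true                                      (enable-default-cni group)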

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dtxpn" [8e81ad4b-7db7-49da-ab4f-53f076c2ff69] Running
E0404 22:43:50.480435   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007455507s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-063570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-063570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7x88b" [bce41d09-7e8c-42ee-9458-968c094e82b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7x88b" [bce41d09-7e8c-42ee-9458-968c094e82b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.008947501s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-063570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dbkmf" [ca39cd5b-c795-451c-9206-fb64e2f2549a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005052184s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-063570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-063570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5mxlz" [5614280c-ff07-4586-b74b-fdf73988c063] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5mxlz" [5614280c-ff07-4586-b74b-fdf73988c063] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.00479875s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-063570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-063570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-063570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hnjc9" [4309be8e-ad07-4171-adc8-4fcc9c0b99a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hnjc9" [4309be8e-ad07-4171-adc8-4fcc9c0b99a5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004191628s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-063570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (151.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-024416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-024416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (2m31.189890843s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (151.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (116.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-143118 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-143118 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m56.239693477s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (116.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-063570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-063570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vckp8" [9b2ba869-0c5c-4ddf-a609-d0b04a890ea3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vckp8" [9b2ba869-0c5c-4ddf-a609-d0b04a890ea3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.005262048s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-063570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-063570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-952083 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0404 22:46:24.217109   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:24.222444   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:24.232774   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:24.253168   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:24.293486   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:24.373869   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:24.534760   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:24.854993   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:25.495861   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:26.776914   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:29.337987   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:30.506044   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:30.511347   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:30.521650   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:30.541945   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:30.582231   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:30.662572   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:30.823073   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:31.143945   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:31.784913   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:33.065205   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:34.459023   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:35.625891   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:40.747040   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:46:44.699437   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/kindnet-063570/client.crt: no such file or directory
E0404 22:46:50.987312   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-952083 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m3.040891674s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-952083 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [02819af6-c3fb-4433-a1a6-61cdeb44a1d9] Pending
helpers_test.go:344: "busybox" [02819af6-c3fb-4433-a1a6-61cdeb44a1d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [02819af6-c3fb-4433-a1a6-61cdeb44a1d9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004804239s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-952083 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)
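
The DeployApp step is the same across the start-stop groups: apply the busybox manifest from testdata, wait for it to run, then read the container's open-file limit. A sketch against the default-k8s-diff-port-952083 context; the "kubectl wait" line is illustrative, as the test helper polls instead:

  kubectl --context default-k8s-diff-port-952083 create -f testdata/busybox.yaml
  kubectl --context default-k8s-diff-port-952083 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
  # the actual assertion: print the open-file ulimit inside the pod
  kubectl --context default-k8s-diff-port-952083 exec busybox -- /bin/sh -c "ulimit -n"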

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-952083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-952083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.087578165s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-952083 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)
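
EnableAddonWhileActive turns on the metrics-server addon against the running cluster with the image and registry overridden (the registry is a fake domain, so the check is presumably about the override landing rather than about a working metrics-server). Sketch:

  # enable the addon on the live profile with image/registry overrides
  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-952083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  # inspect the resulting Deployment for the overridden image/registry
  kubectl --context default-k8s-diff-port-952083 describe deploy/metrics-server -n kube-system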

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-143118 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ad0b43d7-3e6c-4b57-9a18-ee03b01d2a25] Pending
helpers_test.go:344: "busybox" [ad0b43d7-3e6c-4b57-9a18-ee03b01d2a25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0404 22:47:08.251107   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:47:08.256411   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:47:08.266765   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:47:08.287200   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:47:08.328324   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:47:08.409287   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:47:08.569452   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ad0b43d7-3e6c-4b57-9a18-ee03b01d2a25] Running
E0404 22:47:08.890176   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:47:09.530939   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:47:10.811734   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
E0404 22:47:11.468103   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/auto-063570/client.crt: no such file or directory
E0404 22:47:13.372363   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004460876s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-143118 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-143118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-143118 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-024416 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d670143e-2580-40d9-a69c-b7623a37e199] Pending
helpers_test.go:344: "busybox" [d670143e-2580-40d9-a69c-b7623a37e199] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d670143e-2580-40d9-a69c-b7623a37e199] Running
E0404 22:47:28.732892   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/enable-default-cni-063570/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004450603s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-024416 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)
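Editor's note: the DeployApp step above creates a busybox pod from testdata/busybox.yaml, waits up to 8m0s for it to reach Running, then execs `ulimit -n` inside it. Below is a minimal Go sketch of the same two checks driven through kubectl; the 5-second poll interval and the jsonpath query are assumptions for illustration, not the actual test helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForBusybox polls kubectl until the pod labelled integration-test=busybox
// reports phase Running, mirroring the 8m0s wait shown in the log above.
func waitForBusybox(kubeContext string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pod", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[0].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			return nil
		}
		time.Sleep(5 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("busybox pod not Running within %s", timeout)
}

func main() {
	if err := waitForBusybox("no-preload-024416", 8*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	// Equivalent of the exec step in the log: read the open-file limit inside the pod.
	out, err := exec.Command("kubectl", "--context", "no-preload-024416",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	fmt.Printf("ulimit -n: %s (err: %v)\n", out, err)
}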

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-024416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-024416 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)
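Editor's note: the enable step above deliberately points metrics-server at `fake.domain`, so the image can never actually be pulled; the follow-up `kubectl describe deploy/metrics-server` only needs to confirm the override landed in the deployment spec. A minimal sketch of that check follows, assuming the two flags combine into an image reference hosted on `fake.domain` (an assumption based on the flags shown in the log, not on the test source).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the image the metrics-server deployment was rewritten to use.
	out, err := exec.Command("kubectl", "--context", "no-preload-024416",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	if err != nil {
		fmt.Println("could not read metrics-server deployment:", err)
		return
	}
	img := strings.TrimSpace(string(out))
	// --images/--registries should have produced an image hosted on fake.domain.
	if strings.HasPrefix(img, "fake.domain/") {
		fmt.Println("override applied:", img)
	} else {
		fmt.Println("unexpected image:", img)
	}
}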

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (681.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-952083 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0404 22:49:36.908686   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:49:36.914064   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:49:36.924373   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:49:36.944752   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:49:36.985105   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:49:37.065528   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-952083 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (11m21.057313032s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-952083 -n default-k8s-diff-port-952083
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (681.34s)
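Editor's note: SecondStart reruns `minikube start` against the previously stopped profile with the same flags and then confirms the host came back. Below is a minimal sketch of that restart-and-verify sequence; treating any host state other than `Running` as a failure is an assumption here, since the log only shows that the status command succeeded.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "default-k8s-diff-port-952083"
	// Restart the stopped profile with the same flags used for the first start.
	start := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=2200", "--alsologtostderr", "--wait=true", "--apiserver-port=8444",
		"--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.29.3")
	if err := start.Run(); err != nil {
		fmt.Println("second start failed:", err)
		return
	}
	// Verify the host is back, mirroring the status check in the log.
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	if err != nil || strings.TrimSpace(string(out)) != "Running" {
		fmt.Printf("host not running: %q (err: %v)\n", out, err)
		return
	}
	fmt.Println("host is Running after restart")
}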

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (568.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-143118 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0404 22:49:47.150339   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-143118 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (9m27.838746817s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-143118 -n embed-certs-143118
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (568.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (586.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-024416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
E0404 22:50:10.595492   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/calico-063570/client.crt: no such file or directory
E0404 22:50:13.534398   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/addons-371778/client.crt: no such file or directory
E0404 22:50:17.871394   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
E0404 22:50:23.599213   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:23.604501   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:23.614767   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:23.635082   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:23.675427   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:23.755800   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:23.916247   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:24.237354   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:24.878478   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:26.159028   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:28.720226   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:33.841339   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
E0404 22:50:35.387901   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/flannel-063570/client.crt: no such file or directory
E0404 22:50:44.081519   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/bridge-063570/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-024416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (9m46.001612273s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-024416 -n no-preload-024416
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (586.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (6.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-343162 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-343162 --alsologtostderr -v=3: (6.329365647s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-343162 -n old-k8s-version-343162: exit status 7 (75.628436ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-343162 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
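Editor's note: with the cluster stopped, `minikube status` exits non-zero (exit status 7 in the log above) while still printing `Stopped`, which the test records as "may be ok" before enabling the dashboard addon. A minimal sketch of reading both the printed host state and the exit code follows; it assumes only what the log shows, namely that exit code 7 accompanies a stopped host.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-343162"
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A stopped host prints "Stopped" and exits non-zero (7 in the log above);
		// the test treats this as "status error ... (may be ok)" and continues.
		fmt.Printf("host %q, exit code %d\n", out, exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("status could not run:", err)
		return
	}
	// Addons can still be enabled against the stopped profile's configuration.
	enable := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard",
		"-p", profile, "--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	fmt.Println("enable dashboard:", enable.Run())
}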

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (57.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-037368 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
E0404 23:14:36.908486   12554 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-5297/.minikube/profiles/custom-flannel-063570/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-037368 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (57.605332424s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.61s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-037368 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-037368 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.127632052s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-037368 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-037368 --alsologtostderr -v=3: (11.374570104s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-037368 -n newest-cni-037368
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-037368 -n newest-cni-037368: exit status 7 (75.729054ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-037368 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-037368 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-037368 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (36.947676448s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-037368 -n newest-cni-037368
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-037368 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
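Editor's note: the verification step lists the images present in the cluster as JSON and flags anything that is not a stock minikube/Kubernetes image (here `kindest/kindnetd`). Below is a minimal sketch that shells out to the same command and decodes the result generically; the array-of-objects shape is an assumption, since the exact JSON schema is not shown in the log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-037368",
		"image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// Decode without assuming particular field names, then just report the count;
	// a real check would walk the entries and compare tags against an allow-list.
	var images []map[string]interface{}
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected output shape:", err)
		return
	}
	fmt.Printf("cluster reports %d image entries\n", len(images))
}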

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-037368 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-037368 -n newest-cni-037368
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-037368 -n newest-cni-037368: exit status 2 (243.715388ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-037368 -n newest-cni-037368
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-037368 -n newest-cni-037368: exit status 2 (248.212877ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-037368 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-037368 -n newest-cni-037368
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-037368 -n newest-cni-037368
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.62s)
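Editor's note: the Pause test pauses the control plane, expects `status` to report the apiserver as Paused and the kubelet as Stopped (each returning exit status 2, again treated as "may be ok"), then unpauses and re-checks. A minimal sketch of that pause/check/unpause round trip follows, ignoring the non-zero status exit codes the same way the log does.

package main

import (
	"fmt"
	"os/exec"
)

// hostField runs `minikube status` for a single field and returns whatever was
// printed, ignoring the non-zero exit code a paused or stopped component produces.
func hostField(profile, format string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+format, "-p", profile, "-n", profile).Output()
	return string(out)
}

func main() {
	profile := "newest-cni-037368"
	if err := exec.Command("out/minikube-linux-amd64", "pause", "-p", profile,
		"--alsologtostderr", "-v=1").Run(); err != nil {
		fmt.Println("pause failed:", err)
		return
	}
	// While paused: the apiserver should report Paused and the kubelet Stopped (per the log).
	fmt.Print("apiserver: ", hostField(profile, "{{.APIServer}}"))
	fmt.Print("kubelet:   ", hostField(profile, "{{.Kubelet}}"))
	if err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile,
		"--alsologtostderr", "-v=1").Run(); err != nil {
		fmt.Println("unpause failed:", err)
	}
}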

                                                
                                    

Test skip (39/325)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.30.0-rc.0/binaries 0
25 TestDownloadOnly/v1.30.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
260 TestNetworkPlugins/group/kubenet 3.35
268 TestNetworkPlugins/group/cilium 3.71
274 TestStartStop/group/disable-driver-mounts 0.15
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-063570 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-063570" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-063570" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-063570" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-063570

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: cri-dockerd version:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: containerd daemon status:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: containerd daemon config:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: containerd config dump:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: crio daemon status:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: crio daemon config:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: /etc/crio:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

>>> host: crio config:
* Profile "kubenet-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-063570"

----------------------- debugLogs end: kubenet-063570 [took: 3.19870075s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-063570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-063570
--- SKIP: TestNetworkPlugins/group/kubenet (3.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-063570 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-063570

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-063570

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-063570

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-063570

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-063570

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-063570

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-063570

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-063570

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-063570

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-063570

>>> host: /etc/nsswitch.conf:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: /etc/hosts:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: /etc/resolv.conf:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-063570

>>> host: crictl pods:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: crictl containers:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> k8s: describe netcat deployment:
error: context "cilium-063570" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-063570" does not exist

>>> k8s: netcat logs:
error: context "cilium-063570" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-063570" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-063570" does not exist

>>> k8s: coredns logs:
error: context "cilium-063570" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-063570" does not exist

>>> k8s: api server logs:
error: context "cilium-063570" does not exist

>>> host: /etc/cni:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: ip a s:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: ip r s:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: iptables-save:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: iptables table nat:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-063570

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-063570

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-063570" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-063570" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-063570

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-063570

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-063570" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-063570" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-063570" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-063570" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-063570" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: kubelet daemon config:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> k8s: kubelet logs:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-063570

>>> host: docker daemon status:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: docker daemon config:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: docker system info:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: cri-docker daemon status:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: cri-docker daemon config:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: cri-dockerd version:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: containerd daemon status:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: containerd daemon config:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: containerd config dump:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: crio daemon status:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: crio daemon config:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: /etc/crio:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

>>> host: crio config:
* Profile "cilium-063570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-063570"

----------------------- debugLogs end: cilium-063570 [took: 3.551801755s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-063570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-063570
--- SKIP: TestNetworkPlugins/group/cilium (3.71s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-443615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-443615
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
